Comprehensive Guide to Kubernetes Installation and Configuration
This guide will walk you through the process of installing and configuring a Kubernetes cluster from scratch.
Prerequisites
Before starting, ensure all nodes meet the following requirements:
- 2 CPUs or more
- 2GB RAM or more
- Full network connectivity between all machines
- Unique hostname, MAC address, and product_uuid for each node (see the quick checks after this list)
- Required ports open on all nodes (at minimum, 6443 on the control plane and 10250 on every node)
- Swap disabled
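To verify the uniqueness and port requirements, you can run quick checks like the following on each node; this is a sketch of the standard kubeadm preflight-style checks:
# Compare the MAC address and product_uuid across nodes; each must be unique
ip link
sudo cat /sys/class/dmi/id/product_uuid
# Confirm nothing is already listening on the API server port (6443)
nc 127.0.0.1 6443 -v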
Step 1: Prepare All Nodes
Run these commands on all nodes (both master and worker):
# Update the system
sudo apt update && sudo apt upgrade -y
# Install dependencies
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
# Add Kubernetes apt repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Disable swap
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
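You can confirm swap is actually off before continuing:
# Should print nothing if swap is disabled
swapon --show
# The Swap line should show all zeroes
free -h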
# Enable required kernel modules
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
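Optionally confirm that both modules are now loaded:
# Each command should print a matching line if the module is loaded
lsmod | grep overlay
lsmod | grep br_netfilter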
# Configure sysctl settings for Kubernetes
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
Step 2: Install Container Runtime (containerd)
Run on all nodes:
# Install containerd
sudo apt update && sudo apt install -y containerd
# Configure containerd
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# Update the configuration to enable SystemdCgroup
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
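A quick sanity check that the cgroup driver was switched:
# Should now print: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml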
# Restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
Step 3: Install Kubernetes Components
Run on all nodes:
# Install kubelet, kubeadm, and kubectl
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
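Before pinning the packages, you can confirm what was installed:
# Print the installed versions of the Kubernetes tools
kubeadm version
kubectl version --client
kubelet --version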
sudo apt-mark hold kubelet kubeadm kubectl
Step 4: Initialize the Control Plane (Master Node Only)
Run only on the master node:
# Initialize the Kubernetes control plane
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=stable
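Note: if the master node has more than one network interface, kubeadm may advertise the wrong address. In that case you can pass an explicit address instead; the IP below is only an example, so substitute your node's own address:
# Alternative init with an explicit advertise address (example IP)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=192.168.1.10 \
  --kubernetes-version=stable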
# Set up kubeconfig for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
After successful initialization, you'll see a command to join worker nodes. Save this command.
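Until the pod network add-on is installed in Step 5, the master node will normally report a NotReady status; you can still confirm the control plane is reachable:
# Basic sanity checks right after initialization
kubectl cluster-info
kubectl get nodes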
Step 5: Install a Pod Network Add-on (Master Node Only)
We'll use Calico as the network plugin:
# Apply Calico network plugin
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# Verify that all pods are running
kubectl get pods -n kube-system
Step 6: Join Worker Nodes
Run the join command (that you got from Step 4) on each worker node:
# Example (your token and hash will be different)
sudo kubeadm join 192.168.1.10:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef
If you lost the join command, you can regenerate it on the master node:
# Generate a new token
sudo kubeadm token create --print-join-command
Step 7: Verify Your Cluster
From the master node, check the status of your nodes:
kubectl get nodes
All nodes should show as "Ready":
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   10m   v1.23.5
worker1   Ready    <none>                 5m    v1.23.5
worker2   Ready    <none>                 5m    v1.23.5
Step 8: Deploy a Test Application
Let's deploy a simple web application to verify the cluster is working:
# Create a deployment
kubectl create deployment nginx --image=nginx
# Expose the deployment
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the service
kubectl get svc nginx
You can access the NGINX web server using the NodePort assigned to the service.
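For example, you can look up the assigned NodePort and test it with curl; the node IP and port below are placeholders, so substitute your own values:
# Print only the NodePort assigned to the nginx service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'
# Then test from any machine that can reach a node (placeholder IP and port)
curl http://192.168.1.11:30080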
Common Issues and Troubleshooting
Pod Network Issues
If pods can't communicate, check your network configuration:
kubectl describe pod <pod-name>
Node Status NotReady
Check kubelet status:
sudo systemctl status kubelet
sudo journalctl -u kubelet
You can also check that the API server is responding on its health endpoint (run the proxy in the background or in a second terminal):
kubectl proxy --port=8001 &
curl http://localhost:8001/healthz
Authentication Issues
Verify the admin.conf file is correctly configured:
ls -la $HOME/.kube/config
Next Steps
Now that your Kubernetes cluster is up and running, consider:
- Setting up a dashboard (see the example after this list)
- Configuring persistent storage
- Setting up monitoring with Prometheus and Grafana
- Implementing log aggregation
- Configuring Ingress controllers
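For example, a common starting point for the dashboard has been the upstream recommended manifest; treat the URL and version below as an assumption and check the kubernetes/dashboard releases for a version that matches your cluster:
# Example only: deploy the Kubernetes Dashboard (verify the release first)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# The dashboard pods run in their own namespace
kubectl get pods -n kubernetes-dashboard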