Deploying Python Applications with Docker Swarm
This guide demonstrates how to deploy, scale, and manage Python applications using Docker Swarm—Docker's native orchestration tool.
1. Introduction to Docker Swarm
Docker Swarm is Docker's native clustering and orchestration solution. It turns a group of Docker hosts into a single virtual Docker host and provides:
- High availability and fault tolerance
- Load balancing across containers
- Scaling capabilities
- Rolling updates and rollbacks
- Service discovery
- Secure communication with TLS
2. Setting Up a Docker Swarm Cluster
Initialize a Swarm Cluster
# On the manager node
docker swarm init --advertise-addr <MANAGER-IP>
# The command outputs a join token for worker nodes
# Example output:
# docker swarm join --token SWMTKN-1-49nj1cmql0... 192.168.99.100:2377

Add Worker Nodes
# On each worker node, run the join command from the init output
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
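If you need the worker join command again later, you can print it on the manager node at any time:
docker swarm join-token worker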
Verify Nodes
# On the manager node
docker node ls

3. Preparing a Python Application for Swarm
Sample Flask Application
Create a directory for your application:
mkdir -p flask-swarm-demo/app
cd flask-swarm-demo

Create a simple Flask app (app/app.py):
from flask import Flask
import socket
import os

app = Flask(__name__)

@app.route('/')
def hello():
    hostname = socket.gethostname()
    return f"Hello from Python container: {hostname} - Version: {os.environ.get('APP_VERSION', '1.0')}"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Create requirements.txt (app/requirements.txt)
flask==2.3.3
gunicorn==21.2.0

Create a Dockerfile
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Copy requirements and install dependencies
COPY ./app/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY ./app .
# Environment variables
ENV APP_VERSION=1.0
ENV PYTHONUNBUFFERED=1
# Expose port
EXPOSE 5000
# Run the application with Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app", "--workers", "2"]

4. Creating and Testing the Docker Image
Build the Docker Image
# Build the image
docker build -t myapp:1.0 .
# Test locally
docker run -p 5000:5000 myapp:1.0
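With the container running, you can check the response from another terminal (the reported hostname is the container ID):
curl http://localhost:5000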
Push to a Registry
In a multi-node swarm, every node must be able to pull the image, so push it to Docker Hub or a private registry:
# Tag the image for Docker Hub or your private registry
docker tag myapp:1.0 yourusername/myapp:1.0
# Push to registry
docker push yourusername/myapp:1.0

5. Creating a Docker Compose File for Swarm
Create docker-compose.yml:
version: "3.8"

services:
  web:
    image: yourusername/myapp:1.0
    ports:
      - "5000:5000"
    environment:
      - APP_VERSION=1.0
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
        max_attempts: 3
      resources:
        limits:
          cpus: "0.25"
          memory: 256M
    networks:
      - web_network

  redis:
    image: redis:alpine
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
    networks:
      - web_network

networks:
  web_network:
    driver: overlay

6. Deploying to Docker Swarm
Deploy the Stack
# On the manager node
docker stack deploy -c docker-compose.yml myapp

Verify Deployment
# List all stacks
docker stack ls
# List services in the stack
docker stack services myapp
# List tasks in the stack
docker stack ps myapp

7. Scaling the Application
Scale Services
# Scale web service to 5 replicas
docker service scale myapp_web=5
# Verify scaling
docker service ps myapp_web

Configure Auto-scaling (with Docker Flow Swarm Listener)
Docker Swarm has no built-in auto-scaler. For auto-scaling, you can use projects like Docker Flow Swarm Listener or integrate a monitoring solution that triggers scaling through the Docker API, as sketched below.
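For example, a monitoring hook could adjust the replica count through the Docker SDK for Python (pip install docker). This is a minimal sketch, not a full auto-scaler: the service name, target replica count, and trigger are assumptions, and it must run against a manager node's Docker API.
import docker

def scale_if_needed(service_name, desired_replicas):
    client = docker.from_env()  # connects to the local Docker engine (a manager node)
    service = client.services.get(service_name)
    current = service.attrs["Spec"]["Mode"]["Replicated"]["Replicas"]
    if current != desired_replicas:
        service.scale(desired_replicas)  # same effect as `docker service scale`

# Example: called by your monitoring system when load crosses a threshold
scale_if_needed("myapp_web", 5)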
8. Managing Secrets in Swarm
Create a Secret
# Create a secret from a file
echo "mysecretpassword" | docker secret create db_password -
# Create a secret from an environment variable
docker secret create api_key - <<< "$API_KEY"

Use Secrets in Compose
Update docker-compose.yml to use secrets:
version: "3.8"

services:
  web:
    image: yourusername/myapp:1.0
    # ... other configurations
    secrets:
      - db_password
      - api_key
    environment:
      - APP_VERSION=1.0

secrets:
  db_password:
    external: true
  api_key:
    external: true

Access Secrets in Python App
def get_secret(secret_name):
    try:
        with open(f'/run/secrets/{secret_name}', 'r') as secret_file:
            return secret_file.read().strip()
    except IOError:
        return None

# Use the secret
db_password = get_secret('db_password')

9. Configuring Load Balancing and Service Discovery
Docker Swarm handles service discovery and load balancing automatically through its built-in DNS service and the routing mesh.
Internal Service Discovery
Services can communicate using service names:
import redis  # requires the redis client library (pip install redis)

# Connect to the redis service by its service name; Swarm's built-in DNS
# resolves "redis" to the service's virtual IP on the overlay network.
cache = redis.Redis(host="redis", port=6379)
cache.set("greeting", "hello from swarm")

External Load Balancing
For external load balancing, you can use:
- The built-in routing mesh (already set up)
- Traefik or Nginx as edge routers
Example with Traefik:
version: "3.8"

services:
  traefik:
    image: traefik:v2.9
    command:
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - web_network

  web:
    image: yourusername/myapp:1.0
    deploy:
      replicas: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.myapp.rule=Host(`myapp.example.com`)"
        - "traefik.http.services.myapp.loadbalancer.server.port=5000"
    networks:
      - web_network

networks:
  web_network:
    driver: overlay

10. Updates and Rollbacks
Updating the Application
- Update your application code
- Build a new Docker image with a new tag
- Push the image to the registry
- Update the docker-compose.yml file with the new image tag
- Update the stack
# After updating docker-compose.yml with new image tag
docker stack deploy -c docker-compose.yml myapp

Rolling Back
# Rollback to a previous image by updating docker-compose.yml
# Then re-deploy
docker stack deploy -c docker-compose.yml myapp
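Alternatively, a single service can be updated or rolled back directly from the CLI, without editing the compose file (the 1.1 tag is illustrative):
# Roll one service forward to a new image
docker service update --image yourusername/myapp:1.1 myapp_web
# Revert that service to its previous definition
docker service rollback myapp_web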
11. Monitoring and Logging
Centralized Logging with ELK Stack
Add logging services to your docker-compose.yml:
version: "3.8"

services:
  web:
    # ... existing configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Log collection stack
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    deploy:
      resources:
        limits:
          memory: 1G
    networks:
      - web_network

  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    depends_on:
      - elasticsearch
    networks:
      - web_network

  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - web_network

Monitoring with Prometheus and Grafana
Add monitoring services:
  prometheus:
    image: prom/prometheus:v2.42.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - web_network

  grafana:
    image: grafana/grafana:9.3.6
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
    networks:
      - web_network
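Both Prometheus services in this guide mount a ./prometheus.yml that is not shown. A minimal sketch of one, assuming the stack is deployed as myapp and the Flask app exposes a /metrics endpoint (for example via prometheus_client):
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "flask-app"
    dns_sd_configs:
      # In a stack named myapp, tasks.myapp_web resolves to one A record per replica
      - names: ["tasks.myapp_web"]
        type: A
        port: 5000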
12. Best Practices
- Health Checks: Add health checks to your containers to enable proper orchestration (a matching /health route sketch follows the snippet):
services:
  web:
    # ... other configurations
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
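The sample app does not define a /health route, and curl is not included in python:3.11-slim, so install it in the Dockerfile or switch to a Python-based check. A minimal route sketch for the Flask app:
@app.route('/health')
def health():
    # Keep this cheap: it only confirms the process can serve requests
    return {"status": "ok"}, 200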
- Use Configs for Configuration Files (a short reading sketch follows the snippet):
services:
  web:
    # ... other configurations
    configs:
      - source: app_config
        target: /app/config.json

configs:
  app_config:
    file: ./configs/app_config.json
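A config mounted this way is just a read-only file inside the container, which the app can load at startup (a sketch, assuming the /app/config.json target above):
import json

def load_config(path="/app/config.json"):
    # Swarm mounts the config at the target path declared in the stack file
    with open(path) as f:
        return json.load(f)

config = load_config()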
- Resource Limits: Always set resource limits for your services
- Security:
- Use secrets for sensitive information
- Apply the principle of least privilege
- Keep images minimal and up-to-date
- Persistent Data: Use volumes for persistent data:
services:
  db:
    image: postgres:14
    volumes:
      - db_data:/var/lib/postgresql/data
    # ... other configs

volumes:
  db_data:

13. Complete Example with Multi-Service Python Application
Here's a more complete example with a Flask app, Redis cache, PostgreSQL database, and monitoring:
version: "3.8"

services:
  web:
    image: yourusername/myapp:1.0
    ports:
      - "5000:5000"
    environment:
      - APP_VERSION=1.0
      - REDIS_URL=redis://redis:6379/0
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    secrets:
      - db_password
    networks:
      - web_network
    depends_on:
      - redis
      - db

  redis:
    image: redis:alpine
    networks:
      - web_network

  db:
    image: postgres:14-alpine
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
      - POSTGRES_DB=myapp
    volumes:
      - db_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints: [node.role == manager]
    secrets:
      - db_password
    networks:
      - web_network

  prometheus:
    image: prom/prometheus:v2.42.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    networks:
      - web_network

  grafana:
    image: grafana/grafana:9.3.6
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - web_network

networks:
  web_network:
    driver: overlay

volumes:
  db_data:
  grafana_data:

secrets:
  db_password:
    external: true
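Note that the web service above receives the db_password secret but also hard-codes the password in DATABASE_URL; in practice you would typically assemble the URL from the secret at startup (a sketch reusing get_secret() from section 8; the fallback value is illustrative):
import os

# Prefer the Swarm secret; fall back to an environment variable for local runs
db_password = get_secret("db_password") or os.environ.get("DB_PASSWORD", "postgres")
DATABASE_URL = f"postgresql://postgres:{db_password}@db:5432/myapp"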
Conclusion
Docker Swarm provides a straightforward way to deploy and manage containerized Python applications at scale. By following this guide, you can create a resilient, scalable infrastructure for your Python applications with features like service discovery, load balancing, and centralized logging.
For large-scale production deployments, you might eventually consider Kubernetes, which offers more features but comes with increased complexity. However, Docker Swarm provides an excellent balance of power and simplicity for many use cases.