Docker Essentials: From Zero to Containerized Apps
"It works on my machine" used to be a running joke at my previous company. Developers would spend hours debugging environment issues instead of writing code. Docker eliminated 90% of those problems.
What Docker Actually Does
Docker packages your application with all its dependencies into a standardized unit called a container. This container runs identically on any machine—your laptop, your colleague's Mac, or an AWS server.
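If you already have Docker installed, the stock hello-world image is the quickest way to see this in action - it pulls a tiny prebuilt image and runs it the same way on any host:

docker run hello-world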
Your First Dockerfile
Let's containerize a Node.js app:
# Use official Node image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files first (for better caching)
COPY package*.json ./
# Install production dependencies only (npm 8+; older npm uses --only=production)
RUN npm ci --omit=dev
# Copy rest of the application
COPY . .
# Expose port
EXPOSE 3000
# Start the app
CMD ["node", "server.js"]
Build and run:
docker build -t myapp:1.0 .
docker run -p 3000:3000 myapp:1.0
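One caveat before the best practices: COPY . . copies the whole build context, including any local node_modules folder and .git history, which bloats the image and busts the layer cache. A .dockerignore file next to the Dockerfile fixes that; a minimal sketch:

node_modules
npm-debug.log
.git
.env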
Dockerfile Best Practices
1. Use Alpine Images
# Instead of: node:18 (roughly 900MB)
FROM node:18-alpine # (roughly 120MB)
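To see the difference on your own machine, pull both tags and compare the SIZE column (exact numbers vary by Node version and platform):

docker pull node:18
docker pull node:18-alpine
docker image ls node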
2. Multi-Stage Builds
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
3. Don't Run as Root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
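Putting it together with the earlier Dockerfile: the user switch goes after dependencies are installed, and --chown on COPY keeps the application files owned by that user (a sketch reusing the names above):

FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
# Create an unprivileged user and copy the app as that user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --chown=appuser:appgroup . .
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]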
Essential Docker Commands
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# View logs
docker logs <container-id>
# Execute command inside container
docker exec -it <container-id> sh
# Stop and remove all containers
docker stop $(docker ps -q)
docker rm $(docker ps -a -q)
# Remove unused images
docker image prune -a
Docker Compose for Multi-Service Apps
Real applications have multiple services. Here's a typical setup:
# docker-compose.yml
version: '3.8' # optional - Docker Compose v2 ignores the version field
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
    depends_on:
      - db
      - redis
  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
volumes:
  postgres_data:
  redis_data:
Run everything:
docker-compose up -d
docker-compose logs -f api
docker-compose down
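One gotcha with depends_on as written: it only waits for the db container to start, not for Postgres to actually accept connections, so the api can crash on its first connection attempt. With a recent Docker Compose (the Compose Specification), a healthcheck plus the long form of depends_on closes that gap - a sketch reusing the service names above (the redis entry can stay as it is):

services:
  db:
    image: postgres:15-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy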
Debugging Docker Issues
Container exits immediately
# Check logs
docker logs <container-id>
# Run interactively to debug
docker run -it myapp:1.0 sh
Port already in use
# Find what's using the port
lsof -i :3000
# Or use different port mapping
docker run -p 3001:3000 myapp:1.0
Out of disk space
# Clean everything unused (careful: --volumes also deletes named volumes and their data)
docker system prune -a --volumes
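If you want to see what is actually eating the space (images, containers, local volumes, build cache) before deleting anything:

docker system df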
Development Workflow
Use bind mounts for hot-reloading during development:
services:
  api:
    build: ./api
    volumes:
      - ./api:/app          # Bind mount for live changes
      - /app/node_modules   # Preserve node_modules from image
    command: npm run dev
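This assumes the image still contains dev dependencies and that package.json defines a dev script that restarts on file changes (nodemon or similar - an assumption, not shown above). Start the api and its dependencies, rebuilding the image whenever the Dockerfile itself changes:

docker-compose up --build api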
Production Considerations
- Never store secrets in images - Use environment variables or a secrets manager (see the sketch after this list)
- Tag images properly - Use semantic versioning, not just latest
- Scan for vulnerabilities - docker scan myapp:1.0 (replaced by docker scout cves myapp:1.0 in newer Docker versions)
- Limit resources - docker run --memory=512m --cpus=0.5 myapp
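For the secrets point, a minimal sketch of the environment-variable route: keep credentials in a local .env file that never gets committed or copied into the image (list it in .gitignore and .dockerignore), then hand it to the container at run time.

# .env - never committed, never baked into the image
DATABASE_URL=postgres://user:pass@db:5432/myapp

docker run --env-file .env -p 3000:3000 myapp:1.0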
Docker isn't just a deployment tool—it's a development tool. Once you get comfortable with it, you'll wonder how you ever worked without containers.