Why Use Docker Locally?
"It works on my machine" is a symptom of poor environment isolation. When a developer joins a new project, they typically spend their first day configuring nvm to match the project's Node version, wrestling with PostgreSQL installations, and managing conflicting Redis instances on localhost.
Docker Compose eliminates this entirely. By defining your application stack (API + Database + Cache) inside a docker-compose.yml file, any developer can spin up the complete infrastructure with a single command: docker-compose up.
This guide will walk through building an advanced, production-adjacent local configuration for a Node.js backend using PostgreSQL.
Multi-Stage Dockerfiles
A very common mistake when creating Docker environments is using the same Dockerfile for local development and production. In development, you want hot-reloading (nodemon or ts-node-dev), you want dev dependencies, and you don't care about the image size. In production, you want a tiny, secure, compiled image.
We solve this using Multi-Stage Builds. Create a Dockerfile:
# -----------------------------
# Stage 1: Base Configuration
# -----------------------------
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
# -----------------------------
# Stage 2: Development Environment
# -----------------------------
FROM base AS development
# Install ALL dependencies including devDependencies
RUN npm install
COPY . .
# We use nodemon for hot-reloading
CMD ["npm", "run", "dev"]
# -----------------------------
# Stage 3: Build & Production
# -----------------------------
FROM base AS builder
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine AS production
ENV NODE_ENV=production
WORKDIR /app
# Only copy the production dependencies
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the compiled output from the builder stage
COPY --from=builder /app/dist ./dist
# Switch to a non-root user for security
USER node
CMD ["node", "dist/index.js"]
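To sanity-check the stages, you can build each target directly from the host. This is a sketch; the `myapp:dev` and `myapp:prod` tags are arbitrary placeholders, and it assumes the Dockerfile above sits in the current directory:

```shell
# Build only the development stage (includes devDependencies and nodemon)
docker build --target development -t myapp:dev .

# Build the production stage (the final stage is the default target)
docker build --target production -t myapp:prod .

# Compare the resulting image sizes
docker image ls myapp
```

The production image should come out noticeably smaller, since it contains only the compiled `dist/` output and production dependencies.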
Because of this setup, our docker-compose.yml can explicitly target the development stage for our local environment.
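For the `CMD ["npm", "run", "dev"]` instruction to work, package.json needs matching scripts. Here is a minimal sketch assuming a TypeScript project using nodemon with ts-node; adjust the commands to your own toolchain:

```json
{
  "scripts": {
    "dev": "nodemon --watch src --ext ts --exec ts-node src/index.ts",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
```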
Docker Compose Configuration
Now, we wire our backend API and our PostgreSQL database together. Create docker-compose.yml:
version: '3.8'

services:
  # The Node.js Application
  api:
    build:
      context: .
      target: development # Targets the "development" stage of the Dockerfile
    ports:
      - "3000:3000"
      - "9229:9229" # Node.js inspector port for step-through debugging
    volumes:
      - .:/app
      - /app/node_modules # Prevents the host's node_modules from shadowing the container's
    environment:
      - NODE_ENV=development
      - PORT=3000
      - DATABASE_URL=postgresql://oceanuser:oceanpass@db:5432/oceandb
    depends_on:
      db:
        condition: service_healthy # Will not start until the DB passes its healthcheck

  # The Database
  db:
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_USER: oceanuser
      POSTGRES_PASSWORD: oceanpass
      POSTGRES_DB: oceandb
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
      # Scripts in this folder run automatically on first initialization
      - ./init-scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U oceanuser -d oceandb"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  db_data: # Named volume so database data persists across restarts
Vital Concepts Explained
Container Volumes (.:/app and /app/node_modules)
This is the heart of local development. volumes: - .:/app binds your laptop's current directory directly over the /app folder inside the container. When you save a file in your editor, it changes instantly inside the container, triggering your nodemon process to restart.
The second volume rule (- /app/node_modules) is what's called an "anonymous volume." It shields the container's Linux-built node_modules from being shadowed by the bind mount, so that native modules compiled inside the Alpine container aren't replaced by binaries installed on your Mac/Windows host.
Healthchecks & Dependencies (depends_on: condition: service_healthy)
By default, Docker Compose only waits for dependencies to start, not to become ready. This means your Node.js application will attempt to connect to Postgres while Postgres is still initializing, causing a crash loop.
By defining a healthcheck on the database (pg_isready), we tell Docker to wait until Postgres actually accepts connections before booting the Node API.
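You can watch the healthcheck progress from the host while the stack boots. A sketch, assuming the Compose project is running in the current directory:

```shell
# Print the health state Docker currently reports for the db container;
# it reads "starting" during initialization, then "healthy" once pg_isready succeeds
docker inspect --format '{{.State.Health.Status}}' $(docker-compose ps -q db)
```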
Advanced Patterns: Initializing the Database
When setting up locally, you usually need initial users, tables, and roles to test against. Notice the line - ./init-scripts:/docker-entrypoint-initdb.d.
Any .sql or .sh script placed in the init-scripts/ directory (next to your docker-compose.yml) executes automatically the first time the database initializes, i.e. when the db_data volume is empty. The scripts do not run again on subsequent starts unless you delete the volume.
Example init-scripts/01-seed.sql:
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
email VARCHAR(255) UNIQUE NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO users (username, email) VALUES
('admin', 'admin@oceansoftsol.com'),
('developer', 'dev@oceansoftsol.com');
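Once the stack is running, you can confirm the seed script executed by querying through the db container. This uses the credentials defined in the Compose file above:

```shell
# Run a one-off query inside the running db container
docker-compose exec db psql -U oceanuser -d oceandb \
  -c "SELECT username, email FROM users;"
```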
Running It All
To bring the entire stack to life, giving every developer on your team an identical setup, simply run:
docker-compose up -d
Run docker-compose logs -f api to follow the API's logs, then open your editor and start writing code.
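A few other day-to-day commands worth knowing, all run from the directory containing docker-compose.yml:

```shell
# Stop and remove the containers, keeping the database volume
docker-compose down

# Stop everything AND delete db_data, forcing init-scripts to re-run on next boot
docker-compose down -v

# Rebuild the api image after changing the Dockerfile or dependencies
docker-compose up -d --build api

# Open a shell inside the running api container
docker-compose exec api sh
```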
OceanSoft Solutions builds heavily automated deployment pipelines and robust local Docker environments for optimal engineering velocity. Contact us to see how DevOps upgrades can slash bugs across your software lifecycle.