INFRASTRUCTURE · 2026-04-07 · 7 min read

Docker Compose for Beginners: Managing Multiple Containers with Ease

A beginner-friendly guide covering Docker Compose fundamentals, docker-compose.yml syntax, and essential commands. Learn to manage multiple containers through practical code examples.

髙木 晃宏

Representative / Engineer

You've managed to run a single Docker container, but the moment you try combining a web server with a database, things get complicated—sound familiar? This article covers the fundamentals of Docker Compose, which dramatically simplifies multi-container management, with practical code examples throughout.

What Is Docker Compose?

Docker Compose is a tool for defining, launching, and managing multiple Docker containers together. Normally, running multiple containers in coordination requires executing docker run for each container and manually configuring networks and volumes.

When I first started, I was launching containers one by one, but as services grew to three, four, and beyond, managing startup order and network configuration became increasingly cumbersome. That friction was what led me to adopt Compose.

With Docker Compose, you can declaratively describe your entire service configuration in a single YAML file called docker-compose.yml (or simply compose.yml now). A single docker compose up brings up the entire environment, dramatically reducing development environment setup time.

Where Docker Compose Truly Shines

Docker Compose works in many scenarios, but it's especially powerful in the following cases:

  • Unifying team development environments: When a new member joins, writing just one line in the README—"run docker compose up -d"—completes the entire environment setup. No more battling OS differences or local tool version mismatches.
  • Microservices development: With architectures involving API Gateways, authentication services, main applications, and databases, managing containers individually isn't practical. Compose lets you manage the whole picture from a bird's-eye view.
  • Integration testing in CI/CD pipelines: You can simply automate the flow of spinning up databases and cache servers temporarily for test execution, then tearing everything down afterward.
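As a sketch of the CI/CD case, a hypothetical GitHub Actions job could spin the whole stack up for integration tests (the workflow name and the test command are assumptions for illustration, not from a real project):

```yaml
# .github/workflows/integration.yml (hypothetical example)
name: integration-tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Bring the whole stack up in the background
      - run: docker compose up -d --build
      # Run the test suite against the running services (command is an assumption)
      - run: docker compose run --rm app npm test
      # Always tear everything down, including volumes
      - if: always()
        run: docker compose down -v
```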

The moment I felt Compose's benefits most strongly was when a project's development team grew from 3 to 8 members. Previously, we'd been sharing setup instructions with verbal supplements, but after adopting Compose, the YAML file itself conveyed the full picture of the environment, dramatically reducing onboarding time.

Comparing With and Without Docker Compose

The best way to appreciate Compose's convenience is to compare it with the alternative. Consider connecting a Next.js app with PostgreSQL.

Without Compose, you'd need to run these commands in sequence:

# 1. Create a dedicated network
docker network create myapp-network

# 2. Start the PostgreSQL container
docker run -d \
  --name myapp-db \
  --network myapp-network \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=pass \
  -e POSTGRES_DB=mydb \
  -v myapp-db-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16

# 3. Manually verify DB readiness
docker exec myapp-db pg_isready -U user

# 4. Build and start the app container
docker build -t myapp .
docker run -d \
  --name myapp-app \
  --network myapp-network \
  -e DATABASE_URL=postgres://user:pass@myapp-db:5432/mydb \
  -v $(pwd):/app \
  -p 3000:3000 \
  myapp

Stopping and removing containers also requires individual operations:

docker stop myapp-app myapp-db
docker rm myapp-app myapp-db
docker network rm myapp-network

With Compose, this reduces to just docker compose up -d and docker compose down. The more services you add, the wider this gap becomes, and you no longer need to tell team members "run these commands in this order." The YAML file itself serves as documentation.

Docker Compose V1 vs. V2

Docker Compose has historically had two versions, V1 and V2. This is a source of confusion for beginners, so let's clarify.

Aspect           V1 (legacy)                      V2 (current)
Command          docker-compose (hyphenated)      docker compose (space-separated)
Implementation   Python                           Go
Installation     Separate installation required   Bundled with Docker Desktop / Docker CLI plugin
Status           EOL (end of life) since 2023     Current recommended version

If you're using a current version of Docker Desktop, V2 is available by default. When browsing information online, if you see docker-compose (hyphenated), recognize it as V1 documentation. This article uses V2 syntax (docker compose) throughout.

The configuration file name has also been simplified from the traditional docker-compose.yml to compose.yml. Both names work, but for new projects, using compose.yml is the cleaner choice.

Basic Structure of compose.yml

Looking at an actual file is the most effective way to understand the structure. Here's a typical configuration combining a Next.js application with PostgreSQL.

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/mydb
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - .:/app
      - node_modules:/app/node_modules
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 5s
      timeout: 3s
      retries: 5
volumes:
  db_data:
  node_modules:

Here's a reference table summarizing the key fields:

Key           Role                                          Example
services      Top-level key listing container definitions   services:
image         Specifies the Docker image to use             postgres:16
build         Path for building from a Dockerfile           . or ./app
ports         Host-to-container port mapping                "3000:3000"
volumes       Data persistence or host synchronization      db_data:/var/lib/postgresql/data
environment   Environment variable configuration            POSTGRES_USER: user
depends_on    Controls service startup order                depends_on: [db]
healthcheck   Container health check definition             test: ["CMD", "pg_isready"]

Looking back, I initially assumed depends_on alone controlled startup order, but it only guarantees container start order—it doesn't wait for the application to be ready. Until I discovered the combination of condition: service_healthy with healthcheck, I struggled with connection errors.
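To make the difference concrete, here is a minimal sketch of the two variants side by side (the service names are illustrative):

```yaml
services:
  app-ordered:    # illustration: start order only; the app may still race the DB
    depends_on:
      - db
  app-healthy:    # illustration: waits until db's healthcheck passes
    depends_on:
      db:
        condition: service_healthy
```

The short-form list only delays container creation, while the long form with condition: service_healthy actually blocks until the dependency's healthcheck succeeds.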

Detailed Build Configuration

The earlier example used a simple build: ., but build settings can be specified in much greater detail. For monorepo structures or projects using multiple Dockerfiles, the following syntax is useful:

services:
  app:
    build:
      context: .
      dockerfile: docker/Dockerfile.dev
      args:
        NODE_VERSION: "20"
      target: development

Key          Role
context      Build context (root path of files sent to the Docker daemon)
dockerfile   Path to the Dockerfile to use (defaults to Dockerfile in context)
args         Build-time arguments (corresponds to ARG in the Dockerfile)
target       Target stage name for multi-stage builds

target is incredibly useful when combined with multi-stage builds. For example, you can define development and production stages in a single Dockerfile and switch between them from compose.yml.

# Dockerfile
FROM node:20-slim AS base
WORKDIR /app
COPY package*.json ./

FROM base AS development
RUN npm install
CMD ["npm", "run", "dev"]

FROM base AS production
RUN npm ci --only=production
COPY . .
RUN npm run build
CMD ["npm", "start"]

Use target: development for development and target: production for production, letting you manage a single Dockerfile while building environment-appropriate images. I used to maintain separate Dockerfiles for development and production, which meant updating both every time something changed. Switching to multi-stage builds eliminated that problem and made maintenance much easier.
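A sketch of how the switch looks from the Compose side, assuming the Dockerfile above (the file names are illustrative; the production variant is shown as comments to keep this a single valid file):

```yaml
# compose.yml: build the development stage by default
services:
  app:
    build:
      context: .
      target: development

# A compose.prod.yml could override only the stage, e.g.:
# services:
#   app:
#     build:
#       context: .
#       target: production
```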

Environment Variables and .env Files

The earlier example wrote environment variables directly in compose.yml, but hardcoding sensitive information like passwords in YAML files isn't ideal. In real projects, it's common to manage these with .env files.

First, create a .env file in the project root.

# .env
POSTGRES_USER=user
POSTGRES_PASSWORD=s3cret_passw0rd
POSTGRES_DB=mydb
APP_PORT=3000

Your compose.yml can then reference these values using the ${VARIABLE_NAME} expansion syntax.

services:
  app:
    build: .
    ports:
      - "${APP_PORT}:3000"
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
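Compose's interpolation also supports shell-style defaults and required-variable checks, which makes a missing .env entry fail loudly instead of silently expanding to an empty string. A small sketch:

```yaml
services:
  app:
    ports:
      # Fall back to 3000 when APP_PORT is unset
      - "${APP_PORT:-3000}:3000"
    environment:
      # Abort startup with this message if the variable is missing
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?POSTGRES_PASSWORD is required}
```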

Always add .env to .gitignore to keep it out of version control. When sharing with your team, including a .env.example template with just variable names is a thoughtful touch.

# .env.example (include in repository with empty values)
POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=
APP_PORT=3000
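For the .gitignore side, a minimal sketch (the exact patterns depend on your project):

```gitignore
# .gitignore
.env
.env.*
!.env.example
```

The negated pattern keeps .env.example under version control while excluding every other .env variant.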

You can also use the env_file key to load different environment variable files per service.

services:
  app:
    build: .
    env_file:
      - .env
      - .env.app

In my experience, if you don't decide on your environment variable management approach early in the project, unifying it later becomes painful. I recommend building the habit of using .env files from the start, even for small projects.

How Networking Works

Docker Compose automatically creates a network that allows services within the same compose.yml to communicate with each other. This is why the earlier example could use db:5432 with the service name as a hostname.

By explicitly defining networks, you can achieve finer-grained control. For example, if you want the frontend to access the backend but not connect directly to the database, you can isolate networks:

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - front-tier
  backend:
    build: ./backend
    ports:
      - "8080:8080"
    networks:
      - front-tier
      - back-tier
  db:
    image: postgres:16
    networks:
      - back-tier
networks:
  front-tier:
  back-tier:

In this setup, frontend can communicate with backend but cannot directly reach db. Since backend belongs to both networks, it can receive requests from frontend while also accessing db. Being able to implement production-like security architecture during development is a unique advantage of Compose.

Volume Types and Their Use Cases

There are two main types of volumes specified in compose.yml. Misusing them can lead to data loss or performance degradation, so it's worth understanding them thoroughly.

Named Volumes

volumes:
  - db_data:/var/lib/postgresql/data

Data is stored in a persistent area managed by Docker Engine. Data persists even when containers are deleted, unless docker compose down -v is explicitly run. Ideal for database data and node_modules that should survive rebuilds.

Bind Mounts

volumes:
  - .:/app

Mounts a host machine directory into the container. File edits on the host are immediately reflected inside the container, making it perfect for syncing source code during development.

A pain point I encountered early on was handling node_modules. Bind-mounting the entire source code causes the host's node_modules to overwrite the container's, leading to native module incompatibilities. The standard technique is to isolate node_modules with a named volume, as shown earlier.

volumes:
  - .:/app                            # Source code via bind mount
  - node_modules:/app/node_modules    # node_modules isolated with named volume

tmpfs Mounts

Less commonly discussed, tmpfs mounts are a third option. These are temporary areas mounted in the host's memory that disappear when the container stops.

services:
  app:
    build: .
    tmpfs:
      - /tmp
      - /app/.cache

Useful for temporary files during test execution or caches that don't need persistence and where you want to avoid disk I/O bottlenecks. For projects where test suite execution speed is a concern, simply redirecting temporary file output to tmpfs can noticeably improve perceived speed.

Practical Configuration: Three or More Services

Once you're comfortable with basic two-service configurations, try a setup closer to real projects. Here's a three-service example with Next.js + PostgreSQL + Redis. Redis is commonly used for session management and caching.

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      REDIS_URL: redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    restart: unless-stopped
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 5s
      timeout: 3s
      retries: 5
    restart: unless-stopped
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    restart: unless-stopped
volumes:
  db_data:
  node_modules:
  redis_data:

A few notable points:

Using restart: unless-stopped

During development, manually restarting containers after crashes is tedious. Setting a restart policy enables automatic restarts on unexpected crashes. The four options are:

Policy           Behavior
no               Never restart (default)
always           Always restart
on-failure       Restart only on abnormal exit
unless-stopped   Restart unless manually stopped

For development environments, unless-stopped is the most practical. always will restart even after an intentional docker compose stop, which can be annoying.

Auto-executing Initialization SQL

The official PostgreSQL image automatically executes SQL files placed in /docker-entrypoint-initdb.d/ on first startup. This automates table creation and seed data insertion, making development environment setup even smoother.

volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql

Note that this initialization only runs when the data volume is empty. If data already exists, it's skipped. To re-run initialization, delete volumes with docker compose down -v before restarting.

When placing multiple SQL files or shell scripts, they execute in alphabetical order by filename. To explicitly control execution order, add numbered prefixes to filenames.

volumes:
  - ./docker/initdb/01_create_tables.sql:/docker-entrypoint-initdb.d/01_create_tables.sql
  - ./docker/initdb/02_seed_data.sql:/docker-entrypoint-initdb.d/02_seed_data.sql
  - ./docker/initdb/03_create_indexes.sql:/docker-entrypoint-initdb.d/03_create_indexes.sql

Essential Commands to Remember

The commands you'll use daily are surprisingly few. Master the following and you'll be covered for basic operations.

# Start all services in the background
docker compose up -d

# Stop all services and remove containers
docker compose down

# Also remove volumes when stopping (includes DB data—use for dev resets)
docker compose down -v

# Follow logs for a specific service in real-time
docker compose logs -f app

# Rebuild and restart services
docker compose up -d --build

# List running containers
docker compose ps

# Execute a command inside a running container
docker compose exec db psql -U user -d mydb

The distinction between docker compose down and docker compose down -v is especially important. Adding -v deletes volumes along with containers, which means DB data is lost too. Be careful not to accidentally run this when working with production data.

Useful Commands for Daily Development

Beyond the basics, here's a categorized collection of frequently used commands. Keep these bookmarked for reference.

Startup and Shutdown

# Start only a specific service (dependent services start automatically)
docker compose up -d app

# Stop services but keep containers (faster to resume)
docker compose stop

# Resume stopped services
docker compose start

# Restart a specific service only
docker compose restart app

# Delete everything including images, volumes, and networks (full reset)
docker compose down -v --rmi all --remove-orphans

Logs and Debugging

# Show all service logs (last 100 lines)
docker compose logs --tail=100

# Show logs for multiple services together
docker compose logs -f app db

# Open a shell inside a container (interactive debugging)
docker compose exec app sh

# Temporarily spin up a new container to run a command (no impact on existing containers)
docker compose run --rm app npm test

# Show the processes running inside each container
docker compose top

Build and Image Management

# Build from scratch without cache
docker compose build --no-cache

# Build a specific service only
docker compose build app

# List built images
docker compose images

# Show compose.yml configuration with variables expanded
docker compose config

The last command, docker compose config, is often overlooked but invaluable for verifying that .env variables are expanding correctly. When "I set the environment variable but it's not taking effect," this command should be your first check.

Difference Between run and exec

These similar commands serve different purposes:

Command               Behavior                                       Use Case
docker compose exec   Execute a command in a running container       DB connections, log inspection
docker compose run    Create a new container and execute a command   Running tests, migrations

run creates a new container each time, so always use the --rm flag to make it disposable. Forgetting this causes stopped containers to accumulate.

Common Troubleshooting

Here's a checklist of issues beginners commonly encounter. I've run into all of these myself.

  • Port conflicts: If you see Bind for 0.0.0.0:5432 failed: port is already allocated, check for processes using the same port with lsof -i :5432
  • Volume caching: If code changes aren't reflected, rebuild the image with docker compose up -d --build
  • Environment variables not applied: After changing environment variables in compose.yml, you need docker compose up -d to recreate containers. restart alone doesn't apply changes
  • Network name mismatches: For inter-service communication, use the service name from compose.yml as the hostname (e.g., db:5432), not the container name
  • Apple Silicon image compatibility: On M1/M2 Macs, you may need to add platform: linux/amd64

Detailed Troubleshooting Scenarios

Beyond the checklist above, here are cases I encountered that took significant time to resolve.

"Container is running but the app won't connect"

The container shows Up in docker compose ps, but you can't access it from the browser. The three most common causes:

  1. The app isn't listening on 0.0.0.0. Inside a container, localhost refers to the container itself, so the host machine can't reach it. For Node.js, add the HOST=0.0.0.0 environment variable.
  2. The port mapping is reversed. The format is "host:container".
  3. The app inside the container is still starting up. Check with docker compose logs -f app to see if the startup completion message has appeared.
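A minimal sketch of the fix for causes 1 and 2 in compose.yml (HOST is the variable many Node.js dev servers read; check your framework's documentation for the exact name):

```yaml
services:
  app:
    build: .
    environment:
      HOST: "0.0.0.0"   # listen on all interfaces, not just the container's localhost
    ports:
      - "3000:3000"     # host:container -- the host port comes first
```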

"docker compose up hangs and never returns"

If the CMD or ENTRYPOINT in the Dockerfile doesn't start a foreground process, the container exits immediately and the restart policy triggers a start-stop loop. Check docker compose logs for errors and review the Dockerfile's startup command.

"Old containers remain and service names conflict"

When you change the project name or service names in compose.yml, old containers can become orphaned. Use this command to remove them:

docker compose down --remove-orphans

"Build fails due to insufficient disk space"

Over extended Docker Compose usage, unused images, volumes, and build caches accumulate and eat up disk space. When I first encountered the "No space left on device" error, it was a bit alarming. Clean up unused Docker resources with these commands:

# Remove unused images, containers, and networks
docker system prune

# Also remove volumes (caution: unused volume data will be deleted)
docker system prune --volumes

# Check current disk usage
docker system df

Check current usage with docker system df before running prune for safety. Build cache in particular tends to grow surprisingly large, so periodic checks are worthwhile.

Separating Development and Production Environments

Docker Compose is primarily recommended for development and CI environments, but with thoughtful file organization, you can minimize differences between development and production.

Override with Multiple Compose Files

Docker Compose can merge multiple YAML files. This lets you separate base configuration from environment-specific differences.

# compose.yml (base configuration)
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 5s
      timeout: 3s
      retries: 5
# compose.override.yml (for development—auto-loaded)
services:
  app:
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - node_modules:/app/node_modules
    environment:
      NODE_ENV: development
  db:
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:
  node_modules:
# compose.prod.yml (for production—explicitly specified)
services:
  app:
    ports:
      - "80:3000"
    environment:
      NODE_ENV: production
    restart: always
  db:
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: always
volumes:
  db_data:

compose.override.yml is a reserved filename that Compose merges automatically when you run docker compose up. To use production settings instead, specify the files explicitly with the -f option.

# Development (compose.yml + compose.override.yml auto-merged)
docker compose up -d

# Production (compose.yml + compose.prod.yml explicitly specified)
docker compose -f compose.yml -f compose.prod.yml up -d

This mechanism naturally enables workflows where development gets source code bind mounts and debug port exposure, while production strips unnecessary configuration.

Selective Service Startup with Profiles

When you have services only needed during development (like MailHog for email testing or pgAdmin for database management), the profiles feature is convenient.

services:
  app:
    build: .
    ports:
      - "3000:3000"
  db:
    image: postgres:16
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "8080:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: admin
    profiles:
      - debug
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    profiles:
      - debug

Services with profiles specified won't start during a normal docker compose up. Activate them only when needed by specifying the profile.

# Normal startup (app + db only)
docker compose up -d

# Include debug profile (app + db + pgadmin + mailhog)
docker compose --profile debug up -d

Separating non-essential tools into profiles keeps regular startups lightweight and your compose.yml cleaner.

Building a Development Workflow with Compose

Combining the knowledge covered so far, you can build an efficient development workflow centered around Compose. Here's the workflow I actually use in real projects.

Combining with Makefile

docker compose commands tend to get long with options. For frequently used team commands, consolidating them in a Makefile saves typing effort.

.PHONY: up down restart logs db-console fresh

# Start development environment
up:
	docker compose up -d

# Stop environment
down:
	docker compose down

# Rebuild and restart app only
restart:
	docker compose up -d --build app

# Follow app logs
logs:
	docker compose logs -f app

# Connect to DB console
db-console:
	docker compose exec db psql -U $${POSTGRES_USER} -d $${POSTGRES_DB}

# Full environment reset (delete volumes → rebuild → start)
fresh:
	docker compose down -v
	docker compose up -d --build

Short commands like make up, make logs, and make fresh become available. For new members, "just run make up to start" is all you need to communicate, which also simplifies documentation.

I adopted this approach after repeated "what was that docker compose option again?" questions on the team. After centralizing in a Makefile, those inquiries virtually disappeared.

Hot Reloading with Watch Mode

Docker Compose V2.22 and later introduced a watch feature that detects file changes and automatically synchronizes or rebuilds containers.

services:
  app:
    build: .
    ports:
      - "3000:3000"
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
        - action: rebuild
          path: ./package.json

docker compose watch

action: sync immediately synchronizes file changes to the container, while action: rebuild automatically triggers an image rebuild when target files change. In the example above, source code changes are reflected in real-time, and rebuilds only trigger when package.json changes.

Bind mounts can achieve something similar, but watch mode is more resilient to filesystem differences (particularly permission differences between macOS and Linux) and can offer performance advantages. It's a relatively new feature, but it genuinely improves the development experience—give it a try.

Conclusion: Start Small and Learn Incrementally

Docker Compose is a powerful tool that manages multiple containers with just a declarative compose.yml and a handful of commands. Starting with a simple two-service configuration like the one in this article and gradually adding Redis, Nginx reverse proxy, and other services is the most effective learning path.

Here's a recommended step-by-step progression:

  1. Start with two services: Get comfortable with basic Compose operations using an app + DB setup
  2. Separate environment variables into .env: Learn the practices for secret management and team sharing
  3. Expand to three or more services: Add Redis, Nginx, etc. to deepen your understanding of networking and health checks
  4. Use override files for environment separation: Properly manage development and production configurations
  5. Streamline with Makefile and watch mode: Refine your daily development workflow

Once you're comfortable with Compose, every team member can instantly reproduce identical development environments, and "it works on my machine" problems decrease significantly. While every situation is different, the return on investment for development productivity is remarkably high.

If you need help with Docker Compose adoption or container-based development environment setup, feel free to contact us. We can provide tailored recommendations from infrastructure design to operations based on your team's specific situation.