I Only Use Docker for Databases

Hot take: I don't containerize my Python apps for local development. But I love Docker Compose for running databases and other services.

This might sound like heresy in the 'Docker all the things' era. But hear me out - there's a sweet spot where Docker adds value without adding friction.

Why not containerize everything?

Every time I try to fully containerize my dev environment, I run into the same issues:

Slow rebuilds when code changes. Even with volume mounts and clever layer caching, there's always some lag. Change a dependency? Rebuild the image. Change a config file? Sometimes rebuild. It adds up.

Debugger won't attach properly. Remote debugging into a container is possible but annoying. You have to expose ports, configure the debugger for remote connections, and it's never quite as smooth as local debugging.

File watching is flaky. Hot reload depends on file system events. Docker on macOS uses a virtualized file system, and sometimes those events don't propagate correctly. Your server doesn't reload, you make changes that don't appear, you waste 10 minutes figuring out what's wrong.

It's just more complicated. Another layer of abstraction. Another thing to debug when something goes wrong. Another thing new team members have to learn before they can be productive.

Running Python directly on my machine is faster and simpler. I have uv managing my environments, and it takes care of Python versions and dependencies automatically. Why add Docker on top of that?

But databases are different

I don't want to install PostgreSQL on my Mac. I don't want to manage versions, or worry about it running in the background eating memory, or deal with upgrades that might break something.

Same for Redis. Same for Elasticsearch. Same for any infrastructure service.

These are things I don't need to modify or debug. I just need them running with the right configuration. Docker is perfect for this.

My docker-compose.yml

Here's what I use for a typical project:

services:
  postgres:
    image: postgres:16
    ports:
      - '5432:5432'
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

The named volumes mean data persists between restarts. The healthcheck means I can wait for Postgres to be ready before running migrations.
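
If your Docker Compose version is recent enough to support it, the --wait flag uses that healthcheck directly and blocks until services report healthy, which replaces a hand-rolled polling loop:

# Implies detached mode; returns once healthchecks pass
docker compose up --wait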

The workflow

My daily development workflow looks like this:

# Start the services (detached mode)
docker compose up -d

# Check they're running
docker compose ps

# Run my app normally - no Docker involved
uv run uvicorn app.server:app --reload

# Run tests
uv run pytest

# When done for the day
docker compose down

The services start in seconds. My app runs natively with full hot reload and debugging support. Best of both worlds.

Adding more services

Need to add a service? Just add it to docker-compose.yml:

  mailhog:
    image: mailhog/mailhog
    ports:
      - '1025:1025'  # SMTP
      - '8025:8025'  # Web UI

  minio:
    image: minio/minio
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ':9001'
    volumes:
      - minio_data:/data

MailHog for testing email locally. MinIO for S3-compatible object storage. Whatever you need. (If a new service uses a named volume like minio_data, remember to add it to the top-level volumes block as well.)

Connecting from your app

Since the services expose ports on localhost, your app connects normally:

# In your settings/config
DATABASE_URL = 'postgresql://postgres:dev@localhost:5432/myapp'
REDIS_URL = 'redis://localhost:6379/0'

No special Docker networking. No service discovery. Just localhost and ports.

Running migrations

I usually add a script to wait for the database and run migrations:

#!/bin/bash
# scripts/setup-dev.sh

# Start services
docker compose up -d

# Wait for postgres to be healthy
echo 'Waiting for Postgres...'
until docker compose exec postgres pg_isready -U postgres; do
    sleep 1
done

# Run migrations
uv run alembic upgrade head

echo 'Ready to develop!'

When I do use Docker for the app

To be clear, I do containerize my apps - just not for local development.

For production deployment: Yes, absolutely. The app runs in a container with a locked-down environment.

For CI/CD: Yes. The tests run in containers for consistency.

For sharing with non-developers: If someone who doesn't have Python set up needs to run the app, Docker makes that easy.

The distinction is about development experience. When I'm actively writing code, I want the fastest possible feedback loop. Native Python gives me that.

The docker-compose.override.yml trick

If you have team members who prefer fully containerized development, you can support both approaches:

# docker-compose.yml - base services only
services:
  postgres:
    image: postgres:16
    ...

# docker-compose.override.yml - add the app if you want it
services:
  app:
    build: .
    ports:
      - '8000:8000'
    volumes:
      - .:/app
    depends_on:
      - postgres

Docker Compose automatically merges these files. People who want to run the app in Docker can use docker compose up. People who prefer native development can just run the database with docker compose up postgres.
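
In practice, the same repo supports both workflows:

# Fully containerized development: app plus services
docker compose up

# Native development: start just the database
docker compose up -d postgres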

Real-world gotchas and how to fix them

Here are problems I've actually encountered and the solutions that worked:

Port conflicts

Sometimes port 5432 is already taken. Maybe you installed Postgres ages ago and forgot about it. Check what's using a port:

# On macOS/Linux
lsof -i :5432

# On Windows
netstat -ano | findstr :5432

Two solutions: Kill the conflicting process, or just use a different port:

postgres:
  image: postgres:16
  ports:
    - '5433:5432'  # Map host 5433 to container 5432

Then update your DATABASE_URL to use localhost:5433.
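
Using the same connection string as before, just with the new port:

DATABASE_URL = 'postgresql://postgres:dev@localhost:5433/myapp'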

The database won't start after a crash

Postgres didn't shut down cleanly and now the container won't start. Check the logs:

docker compose logs postgres

Usually you'll see something about lock files. The nuclear option:

# Stop everything
docker compose down

# Remove the volume (YOU WILL LOSE DATA)
docker volume rm myapp_postgres_data

# Start fresh
docker compose up -d

This is fine for dev. For data you care about, you should have a seed script anyway.

Migrations fail because tables already exist

Your schema and Alembic's recorded revision have drifted out of sync - maybe you created tables by hand, restored a dump, or wiped the alembic_version table. Now upgrade head tries to create tables that already exist. In dev, the simplest fix is to reset the database and migrate from scratch:

# Drop the database
docker compose exec postgres psql -U postgres -c 'DROP DATABASE myapp;'

# Recreate it
docker compose exec postgres psql -U postgres -c 'CREATE DATABASE myapp;'

# Run migrations from scratch
uv run alembic upgrade head

Better yet, create a reset script:

#!/bin/bash
# scripts/reset-db.sh

docker compose exec postgres psql -U postgres -c 'DROP DATABASE IF EXISTS myapp;'
docker compose exec postgres psql -U postgres -c 'CREATE DATABASE myapp;'
uv run alembic upgrade head
uv run python scripts/seed_data.py
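
The seed script itself is project-specific. A minimal sketch, assuming a SessionLocal session factory in app.database and a User model in app.models (both assumptions - adjust to your layout):

# scripts/seed_data.py - sketch only; SessionLocal and app.models.User are
# assumed names, swap in whatever your project actually uses
from app.database import SessionLocal
from app.models import User

def main():
    session = SessionLocal()
    try:
        # Idempotent: only insert the dev user if it isn't there yet
        if not session.query(User).filter_by(email='dev@example.com').first():
            session.add(User(email='dev@example.com', name='Dev User'))
            session.commit()
    finally:
        session.close()

if __name__ == '__main__':
    main()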

Docker eating all your disk space

Docker images and volumes accumulate. Check your disk usage:

docker system df

Clean up occasionally:

# Remove stopped containers, unused networks, dangling images
docker system prune

# Also remove unused volumes (careful!)
docker system prune --volumes

I run this every few months when my disk gets full.

Performance tuning for local development

Default Postgres settings are conservative. For local dev, you can make it faster:

postgres:
  image: postgres:16
  ports:
    - '5432:5432'
  environment:
    POSTGRES_DB: myapp
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: dev
  volumes:
    - postgres_data:/var/lib/postgresql/data
  # Performance tweaks for local dev only - see warning below
  command: postgres -c shared_buffers=256MB -c fsync=off -c synchronous_commit=off -c full_page_writes=off

WARNING: These settings sacrifice durability for speed. Never use them in production. But for local dev where you can regenerate the database anytime? They make a noticeable difference.

Redis is already fast, but you can tune it too:

redis:
  image: redis:7-alpine
  ports:
    - '6379:6379'
  command: redis-server --save "" --appendonly no
  volumes:
    - redis_data:/data

This disables persistence entirely. Again, fine for dev since you're just using Redis as a cache or session store.

Multiple projects, multiple databases

I work on several projects. Each needs its own database. The trick is to use different compose project names:

# In project A
cd ~/projects/projectA
docker compose -p projecta up -d

# In project B
cd ~/projects/projectB
docker compose -p projectb up -d

The -p flag sets the project name, so Docker creates separate networks and volumes.

Even easier, Docker Compose uses the directory name by default. So if your projects are in different folders, they automatically get separate databases.

But what about port conflicts? Two containers can't both publish host port 5432. Here's my solution:

# projectA/docker-compose.yml
services:
  postgres:
    image: postgres:16
    ports:
      - '5432:5432'
    # ...

# projectB/docker-compose.yml
services:
  postgres:
    image: postgres:16
    ports:
      - '5433:5432'
    # ...

Each project uses a different host port. Adjust your DATABASE_URL accordingly.

Testing strategies with Docker databases

My test suite needs a clean database. Here's the pattern I use:

# conftest.py
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from app.database import Base

@pytest.fixture(scope='session')
def db_engine():
    """Create the engine (and tables) once for the entire test session."""
    # Assumes the myapp_test database already exists - see the note below
    engine = create_engine('postgresql://postgres:dev@localhost:5432/myapp_test')
    Base.metadata.create_all(engine)
    return engine

@pytest.fixture(scope='function')
def db_session(db_engine):
    """Give each test its own session inside a transaction that is rolled back."""
    connection = db_engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)

    yield session

    session.close()
    transaction.rollback()
    connection.close()

Each test gets a clean slate via rollback. Fast and isolated.
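
One prerequisite: the myapp_test database has to exist - the compose file only creates myapp. Create it once with the createdb binary that ships in the Postgres image:

docker compose exec postgres createdb -U postgres myapp_test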

For integration tests that need real data, I use a seed script:

# tests/seed.py
from app.models import User  # adjust to wherever your User model lives

def seed_test_data(session):
    """Create minimal test data."""
    user = User(email='test@example.com', name='Test User')
    session.add(user)
    session.commit()
    return user

Then in tests:

def test_user_profile(db_session):
    user = seed_test_data(db_session)
    response = client.get(f'/users/{user.id}')
    assert response.status_code == 200

Advanced: Multiple database versions

Sometimes you need to test against different Postgres versions. Here's a compose file that runs them side-by-side:

services:
  postgres-14:
    image: postgres:14
    ports:
      - '5414:5432'
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_14_data:/var/lib/postgresql/data

  postgres-15:
    image: postgres:15
    ports:
      - '5415:5432'
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_15_data:/var/lib/postgresql/data

  postgres-16:
    image: postgres:16
    ports:
      - '5416:5432'
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_16_data:/var/lib/postgresql/data

volumes:
  postgres_14_data:
  postgres_15_data:
  postgres_16_data:

Now you can test against any version by changing the port in your DATABASE_URL. This is great for verifying compatibility before upgrading production.
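
If your app and tests read DATABASE_URL from the environment (the conftest above hardcodes it, so treat this as an assumption about your setup), you can run the suite against each version in a loop:

# Run the test suite once per Postgres version
for port in 5414 5415 5416; do
    DATABASE_URL="postgresql://postgres:dev@localhost:${port}/myapp" uv run pytest
done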

When this approach doesn't work

Let me be honest about the limitations:

Complex multi-service architectures. If your app depends on 15 different microservices, running them all in Docker makes more sense than trying to run them natively.

Windows development. Python tooling on Windows is getting better, but it's still rougher than macOS/Linux. Docker provides consistency across platforms.

Team standardization. If your team has already standardized on Docker for everything and it's working, don't rock the boat. Consistency has value.

Debugging database internals. If you're working on database extensions or need to debug Postgres itself, you probably want it running natively. But that's a niche use case.

Resource constraints. Running Docker on an older machine with limited RAM might be slower than native. But modern machines handle it fine.

The key is recognizing that there's no one-size-fits-all answer. This approach works for me. It might not work for you. And that's okay.

Useful Docker Compose commands

Here are commands I use regularly:

# Start services in background
docker compose up -d

# View logs
docker compose logs
docker compose logs -f postgres  # Follow logs for one service

# Check status
docker compose ps

# Restart a service
docker compose restart postgres

# Stop everything
docker compose down

# Stop and remove volumes (nuclear option)
docker compose down -v

# Run a command in a service
docker compose exec postgres psql -U postgres

# See resource usage
docker stats

Environment-specific configuration

You might want different settings for different developers. Use an .env file:

# .env (gitignored)
POSTGRES_PORT=5432
REDIS_PORT=6379
POSTGRES_PASSWORD=dev

Then reference it in docker-compose.yml:

services:
  postgres:
    image: postgres:16
    ports:
      - '${POSTGRES_PORT}:5432'
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

Each developer can customize their .env without conflicts. Commit a .env.example with sensible defaults:

# .env.example
POSTGRES_PORT=5432
REDIS_PORT=6379
POSTGRES_PASSWORD=dev
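
If the app reads the same .env, the port and password live in one place. A minimal sketch, assuming python-dotenv is installed (any settings library that reads .env works the same way):

# settings.py - sketch assuming python-dotenv; adjust to your config setup
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

POSTGRES_PORT = os.getenv('POSTGRES_PORT', '5432')
POSTGRES_PASSWORD = os.getenv('POSTGRES_PASSWORD', 'dev')
DATABASE_URL = f'postgresql://postgres:{POSTGRES_PASSWORD}@localhost:{POSTGRES_PORT}/myapp'
REDIS_PORT = os.getenv('REDIS_PORT', '6379')
REDIS_URL = f'redis://localhost:{REDIS_PORT}/0'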

The bottom line

Docker is great for running services you don't need to modify - databases, caches, message queues. But for the code you're actively developing, native execution is usually better.

This isn't about being anti-Docker or anti-container. It's about using the right tool for the right job. Containers are amazing for deployment, consistency, and isolation. But for the tight feedback loop of active development, native execution often wins.

Your mileage may vary. Maybe you love developing in containers. Maybe your team requires it. Maybe you work on Windows and Docker provides essential consistency. All valid.

The point is to question assumptions. Just because everyone says "containerize everything" doesn't mean it's always the best choice. Think about what you're optimizing for.

For me, that's Docker for services and native Python for everything else. Fast startup, easy debugging, simple workflow. It works.

Find the setup that works for you.
