My 10-Line GitHub Actions Workflow

CI doesn't have to be complicated. I've seen GitHub Actions workflows that are hundreds of lines long, with matrix builds and caching strategies and conditional steps and artifact uploads. For enterprise projects, maybe that's necessary.

For most Python projects? Here's what I actually use, saved as .github/workflows/ci.yml:

name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run ruff check . && uv run ruff format --check .
      - run: uv run pytest

That's it. Checkout, install dependencies, lint, format check, test. Done.

Why this works

With uv and ruff, everything is so fast that you don't need caching. Seriously. Dependencies install in seconds, linting takes milliseconds. The whole workflow typically runs in under a minute.

Compare that to traditional Python CI:

# The old way - don't do this
- uses: actions/setup-python@v4
  with:
    python-version: '3.12'
- uses: actions/cache@v3
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('requirements.txt') }}
- run: pip install -r requirements.txt
- run: pip install black isort flake8 pytest
- run: black --check .
- run: isort --check .
- run: flake8 .
- run: pytest

More steps, more tools, more configuration, and it's slower. The cache helps but adds complexity. With uv, you just... don't need it.

Breaking down each step

actions/checkout@v4 - Clones your repo. Required for any CI.

astral-sh/setup-uv@v4 - Installs uv. This is the Astral team's official action. It handles caching uv itself and is very fast.

uv sync - Installs all dependencies from pyproject.toml and uv.lock. This typically takes 1-3 seconds for a project with 50+ dependencies. Compare that to pip which might take 30-60 seconds.

uv run ruff check . && uv run ruff format --check . - Lint and format check in one step. The && means the second command only runs if the first succeeds. Ruff is so fast (written in Rust) that both commands together take maybe 200ms.

uv run pytest - Run your tests. The uv run prefix ensures you're using the virtual environment's pytest, not a system-installed one.
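One tweak worth knowing for the uv sync step: uv can assert that the lockfile is in sync with pyproject.toml instead of silently re-resolving. A minimal variant, using uv's --locked flag:

- run: uv sync --locked  # fail if uv.lock doesn't match pyproject.toml

This catches the "changed pyproject.toml, forgot to commit the lockfile" mistake before it reaches anyone else.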

Testing multiple Python versions

If you need to test against multiple Python versions, add a matrix:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.10', '3.11', '3.12']
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
        with:
          python-version: ${{ matrix.python-version }}
      - run: uv sync
      - run: uv run ruff check . && uv run ruff format --check .
      - run: uv run pytest

This runs the same steps for each Python version in parallel. The matrix makes this explicit and easy to maintain.

Adding type checking

If you use type hints (you should), add pyright. Put it in your dev dependencies so uv run can find it:

- run: uv run ruff check . && uv run ruff format --check .
- run: uv run pyright .
- run: uv run pytest

Pyright is fast and catches real bugs. It's worth the extra few seconds.

Separate jobs for better visibility

If you want separate status checks for lint vs tests, split into jobs:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run ruff check .
      - run: uv run ruff format --check .

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run pytest

Jobs run in parallel by default, so this doesn't add time. You get separate green/red indicators in your PR.

The one thing you should definitely do

Turn on branch protection. Go to your repo settings, find the branch protection rules, require status checks to pass before merging, and select your CI job (test, in the workflows above) as a required check.

This one setting prevents more bugs than code review. Seriously.

It's not that code review is bad. It's just that humans are bad at catching the things that automated tests are good at catching:

  • Did you break an existing test? CI catches it.
  • Did you introduce a syntax error? CI catches it.
  • Did you forget to format? CI catches it.
  • Did you break type hints? CI catches it.

Let the robots do what robots are good at. Save human review for design decisions and architecture.

Running on PRs only to save minutes

GitHub gives you a limited number of free CI minutes. If you're pushing to branches frequently, you might want to only run CI on pull requests:

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]  # Only run on direct pushes to main

This runs CI on all PRs and on pushes directly to main, but not on every push to feature branches.
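Another minutes-saver: cancel superseded runs. If you push three commits to a PR in quick succession, there's no point finishing CI for the first two. GitHub's concurrency setting handles this; put it at the top level of the workflow file:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

Each new push then cancels any in-progress run for the same branch or PR.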

Adding a badge to your README

Show off your passing tests:

![CI](https://github.com/username/repo/actions/workflows/ci.yml/badge.svg)

Replace username/repo with your actual repo. The badge updates automatically.

Debugging failed workflows

When CI fails, click through to see the logs. The output shows exactly what happened. Common issues:

  • Missing dependency: Add it to pyproject.toml
  • Import error: Usually means a dependency isn't installed
  • Test failure: Read the test output, fix the bug
  • Lint error: Run uv run ruff check --fix . locally

Pro tip: run the same commands locally before pushing. uv run pytest and uv run ruff check . should give you the same results as CI.

One thing that trips people up: GitHub Actions runs on a fresh VM every time. That means:

  • Your database isn't there (use SQLite for tests or spin up a service)
  • Environment files aren't there (use secrets or mock them)
  • Local config files aren't there (make tests self-contained)

If it works locally but fails in CI, you probably have a dependency on local state.

Working with secrets and environment variables

Most projects eventually need API keys or database URLs. Never commit these to your repo. Use GitHub Secrets instead.

Add secrets in your repo settings under Settings → Secrets and variables → Actions. Then reference them in your workflow:

- run: uv run pytest
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}
    API_KEY: ${{ secrets.API_KEY }}

The secrets are encrypted and only exposed to the workflow runner. They won't appear in logs (GitHub masks them automatically).

For non-sensitive config, use environment variables directly:

- run: uv run pytest
  env:
    ENVIRONMENT: ci
    DEBUG: "false"

One gotcha: if your code reads from a .env file, it won't exist in CI. Either:

  1. Create a .env.example and copy it in CI: - run: cp .env.example .env
  2. Use environment variables directly in your workflow
  3. Make .env optional and fall back to environment variables

I prefer option 3. Use python-dotenv but make it graceful:

import os

from dotenv import load_dotenv

load_dotenv()  # Loads .env if it exists, silently does nothing if not

# Then just use os.getenv() everywhere
database_url = os.getenv("DATABASE_URL", "sqlite:///test.db")

Adding code coverage

Code coverage tells you what percentage of your code is tested. It's a useful metric, but don't obsess over 100% coverage. Aim for 70-80% and focus on covering critical paths.

Add coverage to your workflow (the --cov flag comes from the pytest-cov plugin, so add it to your dev dependencies):

- run: uv run pytest --cov=. --cov-report=term-missing

This shows which lines aren't covered. For a nicer experience, integrate with Codecov:

- run: uv run pytest --cov=. --cov-report=xml
- uses: codecov/codecov-action@v3
  with:
    files: ./coverage.xml

Codecov gives you a web UI with trends and PR comments showing coverage changes. The free tier is generous for open source projects.
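If you want CI to enforce a minimum rather than just report, pytest-cov has a threshold flag that fails the run when coverage drops below it:

- run: uv run pytest --cov=. --cov-fail-under=80

The 80 is just an example; pick a number your codebase already meets so the check starts out green.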

Add the coverage badge to your README:

[![codecov](https://codecov.io/gh/username/repo/branch/main/graph/badge.svg)](https://codecov.io/gh/username/repo)

Testing with databases and services

If your tests need a database or Redis or any other service, use GitHub Actions service containers:

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run pytest
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/test

The service container runs alongside your workflow. It's destroyed when the job finishes, so every run is clean.

This works for Redis, MySQL, MongoDB, Elasticsearch, whatever. Just find the Docker image and configure it as a service.
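For example, a Redis service looks like this (a sketch; it assumes your tests read the connection string from a REDIS_URL variable):

    services:
      redis:
        image: redis:7
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 6379:6379

Then pass REDIS_URL: redis://localhost:6379 in your test step's env, the same way DATABASE_URL is passed above.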

Deploying automatically on successful CI

Once your tests pass, you might want to deploy automatically. Here's a simple pattern:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run pytest

  deploy:
    needs: test  # Only run if test job succeeds
    if: github.ref == 'refs/heads/main'  # Only on main branch
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}

The needs: test line ensures deploy only runs if tests pass. The if condition ensures you only deploy from the main branch, not from every PR.

Your deploy script might push to Heroku, deploy to AWS, build a Docker image, whatever. Keep the actual deployment logic in a script so you can test it locally.
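One more option worth knowing here: GitHub can tie the deploy job to a named environment, which gets you environment-scoped secrets and an optional approval gate before the job runs. A sketch (production is whatever name you configure under Settings → Environments):

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    environment: production  # scoped secrets, optional required reviewers
    runs-on: ubuntu-latest

Everything else about the job stays the same.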

Monorepo patterns

If you have multiple Python packages in one repo, you probably want to only run tests for changed packages. Use path filters:

on:
  push:
    paths:
      - 'packages/api/**'
      - '.github/workflows/api.yml'

This workflow only runs when files in packages/api/ change. Create separate workflows for each package.
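One wrinkle: the paths filter above only covers push events, so this workflow never runs on pull requests. If your PRs are gated on CI, add the same filter under pull_request:

on:
  pull_request:
    paths:
      - 'packages/api/**'
      - '.github/workflows/api.yml'

Both filters together cover pushes to main and PR updates.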

Alternatively, use a matrix to test all packages:

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        package: [api, worker, shared]
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: cd packages/${{ matrix.package }} && uv sync
      - run: cd packages/${{ matrix.package }} && uv run pytest

This runs tests for all packages in parallel, and it keeps every package tested on every change, even when only one of them was touched.

Performance optimization tricks

Even though uv is fast, you can make workflows even faster:

1. Use --no-dev for deployment workflows

- run: uv sync --no-dev  # Skip dev dependencies

This installs only production dependencies, which is faster and uses less disk space.

2. Enable uv caching explicitly

The setup-uv action caches uv itself, but you can also cache the uv cache directory:

- uses: actions/cache@v3
  with:
    path: ~/.cache/uv
    key: ${{ runner.os }}-uv-${{ hashFiles('uv.lock') }}

This is rarely necessary since uv is already so fast, but for projects with 200+ dependencies it can shave off a few seconds.
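Also worth knowing: newer versions of the setup-uv action can do this for you via a built-in enable-cache input, which caches uv's cache directory keyed on your lockfile by default:

- uses: astral-sh/setup-uv@v4
  with:
    enable-cache: true

If you use that, skip the separate actions/cache step; caching the same directory twice buys nothing.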

3. Run independent jobs in parallel

Instead of running lint then test sequentially, run them in parallel:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run ruff check .

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run pytest

Both jobs run at the same time. Total wall-clock time is max(lint_time, test_time) instead of lint_time + test_time.

4. Mind the fail-fast default in matrix builds

By default fail-fast is enabled: when one matrix job fails, GitHub cancels the remaining in-progress jobs, which saves minutes on a build that's broken on every Python version. If you'd rather let every version finish (useful for telling a version-specific failure from a general one), disable it:

strategy:
  fail-fast: false
  matrix:
    python-version: ['3.10', '3.11', '3.12']

The trade-off: a broken build now burns minutes on every version instead of stopping early.

Common gotchas and how to fix them

Problem: Workflow runs on push to any branch, burning through free minutes.

Fix: Use path filters or limit to specific branches:

on:
  push:
    branches: [main]
  pull_request:

Problem: Tests pass locally but fail in CI with import errors.

Fix: You probably rely on a package that's installed in your local environment but not declared in pyproject.toml. Clone the repo into a fresh directory and run uv sync there to reproduce.

Problem: Workflow times out after 6 hours.

Fix: You have an infinite loop, or a command is waiting for input. There's no interactive terminal in GitHub Actions, so anything that prompts (an installer asking for confirmation, a CLI waiting on stdin) can sit there until the timeout.
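Also worth setting regardless: a job-level timeout, so a hang costs minutes instead of hours. timeout-minutes works on jobs and on individual steps:

jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 10  # kill the job after 10 minutes instead of the 6-hour default

Pick a limit comfortably above your normal run time; for a workflow that finishes in under a minute, 10 is generous.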

Problem: Ruff or pytest can't find files.

Fix: Make sure you're running from the repo root. The working directory is set to the checkout directory by default, but if you cd somewhere, relative paths break.
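If your project lives in a subdirectory (the backend/ below is just an illustration), set the working directory once at the job level instead of cd-ing in every step:

defaults:
  run:
    working-directory: backend

This applies to every run step in the job; uses steps like checkout are unaffected.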

Problem: Secrets aren't working.

Fix: Check the secret name matches exactly (case-sensitive). Also, secrets aren't available for pull requests from forks (security feature).

Pre-commit hooks vs GitHub Actions

Some people use pre-commit hooks to run linting before every commit. I don't. Here's why:

  • Hooks can be bypassed with --no-verify
  • Hooks run on one developer's machine, not everyone's
  • Hooks make commits slower and interrupt flow
  • CI catches the same issues and can't be bypassed

GitHub Actions is the source of truth. If you want to run checks locally, just run uv run ruff check . manually. It's fast enough that you don't need automation.

That said, if you work on a team where people regularly push broken code, pre-commit hooks can help. Just don't rely on them exclusively.

What about GitHub Actions alternatives?

GitLab CI, CircleCI, Travis CI, Jenkins, Buildkite... there are dozens of CI systems. I stick with GitHub Actions because:

  • It's already integrated with GitHub (no OAuth setup)
  • Free tier is generous (2000 minutes/month for private repos)
  • Configuration is simple (one YAML file)
  • Marketplace has tons of pre-built actions
  • Works the same way for public and private repos

Unless you have specific needs (like self-hosted runners or advanced caching), GitHub Actions is the right choice for GitHub projects.

Real-world example from my projects

Here's the actual workflow from one of my production projects:

name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v4
      - run: uv sync
      - run: uv run ruff check .
      - run: uv run ruff format --check .
      - run: uv run pyright .
      - run: uv run pytest --cov=. --cov-report=xml
      - uses: codecov/codecov-action@v3

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}

This runs tests on every push and PR. If tests pass and we're on main, it deploys to Fly.io automatically. The whole process takes about 45 seconds.

No manual deployments, no "works on my machine" bugs making it to production, no waiting around. Push to main, grab coffee, come back to a deployed app.

The bottom line

Modern Python tooling (uv + ruff) makes CI fast and simple. You don't need complicated caching strategies or matrix builds for most projects.

Start with the simplest workflow that catches real problems. Add complexity only when you need it. For most projects, those 10 lines are all you need.

Once you've got basic CI working, you can add:
- Code coverage if you want metrics
- Multiple Python versions if you support them
- Deployment automation if you're shipping to production
- Service containers if you test against databases

But start simple. A working 10-line workflow is infinitely better than a perfect 200-line workflow you never finish setting up.

The goal isn't perfect CI. The goal is to catch bugs before they hit production, and even the simplest workflow does that.
