Print Debugging Is Fine

Yes, there are debuggers. Yes, there's logging. Yes, there are better tools. But sometimes you just need to slap a print statement in there.

I'm not ashamed to admit it. After a decade of programming, I still reach for print() as my first debugging tool. And I don't think that's a problem.

Why print() works

It's zero friction. No setup, no imports (well, it's a builtin), no configuration. You type print(thing), you see thing. Done.

When I'm hunting a bug, I want to understand what's happening as fast as possible. I don't want to think about debugger configurations or log levels. I want to see what value this variable has right now.

Print debugging has another huge advantage: it works everywhere. Docker containers, remote servers, lambda functions, CI/CD pipelines. You don't need IDE support or port forwarding or any special setup. If you can see stdout, you can debug with print.

The anatomy of a print debugging session

Here's what a real debugging session looks like for me. I get a bug report: "User checkout isn't calculating tax correctly."

First, I find the checkout function and add a print:

def calculate_checkout(cart, user):
    print(f'DEBUG: {cart=} {user=}')
    subtotal = sum(item.price for item in cart.items)
    tax = calculate_tax(subtotal, user.location)
    return subtotal + tax

Run it. Oh, the cart looks fine but user.location is None. Why?

def get_user_location(user):
    print(f'DEBUG: checking location for {user.id=}')
    if user.shipping_address:
        print(f'DEBUG: found shipping address: {user.shipping_address}')
        return user.shipping_address.state
    print(f'DEBUG: no shipping address, trying billing')
    return user.billing_address.state  # This throws AttributeError if billing_address is None

Aha! For users without a shipping address, we reach straight for billing_address.state without checking whether billing_address exists; if it's None, we crash with an AttributeError instead of falling back gracefully. That's the bug. A few print statements, bug found in 30 seconds.

This is print debugging in practice. Quick, dirty, effective.

The f-string trick

Python 3.8 added this and it's amazing:

x = calculate_thing()
print(f'{x=}')  # Prints: x=42

The = inside the f-string prints both the variable name and value. This is so much better than print('x:', x) because you can't accidentally mismatch the label and the variable.

You can also format the output:

price = 19.99
print(f'{price=:.2f}')  # price=19.99

items = [1, 2, 3]
print(f'{items=!r}')  # items=[1, 2, 3]

Print multiple things at once

def process(user_id, amount):
    print(f'{user_id=} {amount=}')  # user_id=123 amount=45.67
    ...

You can chain as many as you want in one statement. This is especially useful at function boundaries:

def transfer_funds(from_account, to_account, amount):
    print(f'{from_account=} {to_account=} {amount=}')
    # See everything that matters in one line

Common print debugging patterns

Over the years, I've developed some go-to patterns that show up in almost every debugging session.

The "did this even run?" check

def rarely_called_function():
    print('XXX: rarely_called_function STARTED')
    # ... rest of function

Sometimes you just need to know if a code path executes. Don't overthink it.

The before/after pattern

def transform_data(data):
    print(f'BEFORE: {data=}')
    result = complicated_transformation(data)
    print(f'AFTER: {result=}')
    return result

Great for tracking down where data gets corrupted.

The enumerate pattern

If you must print in a loop (which you usually shouldn't), at least make it useful:

for i, item in enumerate(items):
    if i % 100 == 0:  # Print every 100th item
        print(f'Progress: {i}/{len(items)} - {item=}')

Or only print when something interesting happens:

for item in items:
    if item.status == 'weird_edge_case':
        print(f'DEBUG: Found weird case: {item=}')

The call stack trick

Want to know how you got here?

import traceback
print(''.join(traceback.format_stack()))

This prints the full call stack. Useful when a function is being called from multiple places and you need to know which path triggered the bug.

The type checking pattern

Sometimes the bug is that you're getting the wrong type:

def process(data):
    print(f'DEBUG: {type(data)=} {data=}')
    # Is it a dict? A list? A string? Find out!

I've spent embarrassing amounts of time debugging functions that expected a list but got a generator, or expected a dict but got a None. One print statement would have saved me hours.
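
A generator gives itself away instantly in that output; a made-up example:

def numbers():
    yield from [1, 2, 3]

items = numbers()
print(f'{type(items)=} {items=}')
# type(items)=<class 'generator'> items=<generator object numbers at 0x...>
# A list would have shown its contents; the opaque repr is the giveaway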

Temporary prints with a marker

When I add debugging prints, I always include something searchable so I can find and remove them later:

print(f'DEBUG: {user=}')  # Easy to grep for 'DEBUG:'
print(f'XXX: {response=}')  # Or use XXX

Before committing, I run git diff | grep DEBUG to make sure I didn't leave any stray prints behind. Or just grep -r 'XXX:' . to find them all.

Pro tip: Use something obnoxious like 'XXX' or 'DELETEME' so you'll never accidentally commit it. I've definitely shipped code with print(f'user: {user}') before because it looked too much like real logging.

Print debugging gotchas

There are some traps that will waste your time if you're not careful.

The mutable object trap

data = {'count': 0}
print(f'{data=}')   # Shows count=0: the repr is built at the moment of the call
data['count'] += 1
# Worry: will the printed output show count=1 because the dict changed afterwards?

With print and f-strings, no: the value is rendered immediately, so what you see is the state at call time. Some logging systems evaluate lazily, though, and can show a mutated object's later state. With print, you're safe. If you switch to logging, be aware.
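
If you want proof, build the f-string before the mutation and print it afterwards; the captured text doesn't change:

data = {'count': 0}
before = f'{data=}'   # the repr is rendered right here
data['count'] += 1
print(before)         # data={'count': 0}, the pre-increment state
print(f'{data=}')     # data={'count': 1}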

The buffering problem

Sometimes your print statement executes but you don't see the output before your program crashes:

print('About to do dangerous thing')
crash_the_program()  # You never see the print!

This is because stdout is buffered (block-buffered when piped or redirected, line-buffered at an interactive terminal). Fix it with flush:

print('About to do dangerous thing', flush=True)
crash_the_program()  # Now you'll see it

Or use stderr instead:

import sys
print('DEBUG:', user, file=sys.stderr)

stderr is unbuffered (or at worst line-buffered), so you'll see the output immediately.
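
If you're sprinkling a lot of these, a small convenience is a print that always flushes; dprint is just a name I'm making up for this sketch:

from functools import partial

# A debug print that always flushes, so the output survives a crash on the very next line
dprint = partial(print, flush=True)

dprint('About to do dangerous thing')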

The Unicode problem

user = User(name="José")
print(f'{user.name=}')  # Works fine in your terminal

But if you're piping output or running in a weird environment, you might get UnicodeEncodeError. Quick fix:

print(f'{user.name=}'.encode('utf-8', errors='replace'))  # Prints the bytes repr: ugly, but it can't raise

Or just:

print(ascii(user.name))  # Escapes non-ASCII characters

The massive object problem

Don't print objects that have massive string representations:

result = api.get_all_users()  # Returns 10,000 users
print(f'{result=}')  # Your terminal explodes

Instead:

print(f'{len(result)=}')  # len(result)=10000
print(f'{result[:3]=}')  # First 3 items
print(f'{type(result)=}')  # Just the type

When to use a real debugger

When you don't know where the bug is. If I'm adding 10+ print statements and still can't figure out what's happening, it's time to stop and use a debugger.

Setting a breakpoint and stepping through code is faster than adding print statements everywhere:

def suspicious_function(data):
    breakpoint()  # Drops into pdb
    result = process(data)
    return result

When you hit the breakpoint, you're in an interactive shell. You can inspect any variable, call functions, whatever you need.

pdb survival guide

The basic pdb commands:

n       - next line (step over)
s       - step into function
c       - continue until next breakpoint
p expr  - print expression
pp expr - pretty-print expression
l       - show source code around current line
ll      - show entire function source
w       - where am I? (stack trace)
u       - move up stack frame
d       - move down stack frame
q       - quit debugger

That's 90% of what you need. There's more, but these will get you through most debugging sessions.

A real example. You're in pdb and want to understand what's happening:

(Pdb) l          # Show me the code
(Pdb) p user     # What is user?
(Pdb) p user.email  # What's the email?
(Pdb) pp user.__dict__  # Show all attributes
(Pdb) w          # How did I get here?
(Pdb) u          # Go up to caller
(Pdb) p local_var  # Check caller's variables

You can also run arbitrary Python code:

(Pdb) [x for x in items if x.status == 'active']
(Pdb) sum(item.price for item in cart)

This is incredibly powerful. You're not just inspecting state, you're running code in the exact context where the bug is happening.
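
One small gotcha: if a variable shares its name with a pdb command (n, l, p, and so on), prefix the statement with ! to force pdb to treat it as Python rather than a command:

(Pdb) !n = len(items)   # plain 'n = len(items)' would be read as the 'next' command
(Pdb) p n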

Try ipdb for a better experience

uv add --dev ipdb

Then use import ipdb; ipdb.set_trace() instead of breakpoint(). You get syntax highlighting, tab completion, and a generally less hostile interface than vanilla pdb.

icecream for fancy prints

from icecream import ic

ic(x)  # ic| x: 42
ic(user.name, user.email)  # ic| user.name: 'Ben', user.email: 'ben@..'

icecream shows you where it was called from and formats the output nicely. Useful when you have print statements all over the place and can't remember which is which.

It also works as a drop-in replacement for print:

ic()  # Just prints the filename and line number

This is surprisingly useful for 'did this code path execute?' questions.

Rich library for beautiful output

If you want print debugging with actual formatting and colors, Rich is fantastic:

uv add rich

from rich.console import Console

console = Console()
console.print(data, style="bold red")
console.print("[blue]Processing user[/blue]", user_id)

But where Rich really shines is with complex data structures:

from rich import print as rprint

# Automatically formats nested dicts, lists, etc.
rprint({"user": {"name": "Ben", "roles": ["admin", "dev"]}})

The output is syntax highlighted and properly indented. If you're debugging API responses or configuration objects, Rich makes it way easier to spot issues in nested data.

Rich also has a console.log() method that automatically adds timestamps:

console.log("Starting process")  # [10:23:45] Starting process

It's still print debugging, just prettier.

logging.debug as a middle ground

Sometimes you want something between print statements and full debugger sessions. Python's logging module is perfect for this:

import logging
logging.basicConfig(level=logging.DEBUG)

def calculate(x, y):
    logging.debug(f'calculate called with {x=} {y=}')
    result = x * y
    logging.debug(f'returning {result=}')
    return result

The nice thing about logging is you can leave it in the code. Set the level to INFO in production, DEBUG when you're hunting bugs. No need to add and remove print statements.
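
One minimal way to do that flip without touching code is to read the level from an environment variable (LOG_LEVEL is a name I'm assuming here, not a standard one):

import logging
import os

# Default to INFO; run with LOG_LEVEL=DEBUG when hunting bugs
level_name = os.getenv('LOG_LEVEL', 'INFO').upper()
logging.basicConfig(level=getattr(logging, level_name, logging.INFO))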

You can also configure different loggers for different modules:

logger = logging.getLogger(__name__)
logger.debug(f'Processing {item=}')

This way you can turn on verbose logging for just the module you're debugging, without flooding your terminal with every log message from every library.
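
A sketch of that, with made-up module names: keep everything (including chatty third-party libraries) at WARNING and drop only the suspect module down to DEBUG.

import logging

logging.basicConfig(level=logging.WARNING)                   # quiet by default
logging.getLogger('myapp.checkout').setLevel(logging.DEBUG)  # verbose where it matters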

Here's a more complete logging setup I use:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    datefmt='%H:%M:%S'
)

logger = logging.getLogger(__name__)
logger.debug(f'Starting process with {config=}')

The format string adds timestamps and context. When you're debugging, seeing when things happened is incredibly useful.

Advanced print debugging tricks

Once you're comfortable with the basics, here are some power-user techniques.

Conditional debugging

DEBUG = True  # Toggle this when debugging

def process(data):
    if DEBUG:
        print(f'{data=}')
    # ... rest of function

Or use an environment variable:

import os
DEBUG = os.getenv('DEBUG', '0') != '0'  # os.getenv returns strings, so compare explicitly

if DEBUG:
    print(f'Processing {item=}')

Now you can turn debugging on and off without changing code: DEBUG=1 python app.py

The context manager approach

If you're debugging performance or want to measure timing:

import time
from contextlib import contextmanager

@contextmanager
def debug_timer(name):
    start = time.perf_counter()  # perf_counter is steadier than time.time() for durations
    print(f'[{name}] Starting...')
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f'[{name}] Took {elapsed:.3f}s')

with debug_timer('database query'):
    results = db.query(sql)

This prints how long the code block took. Great for finding performance bottlenecks.

Print to a file instead of stdout

When you're generating tons of debug output:

with open('/tmp/debug.log', 'a') as f:
    print(f'DEBUG: {data=}', file=f)

Now your debug output doesn't pollute your application's actual output. You can tail -f /tmp/debug.log in another terminal to watch it.

JSON dumps for API debugging

When debugging API responses or complex nested data:

import json
response = api.get_user(123)
print(json.dumps(response, indent=2))

The indent=2 makes it human-readable. Much easier to spot issues in nested structures.
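
One snag: json.dumps chokes on datetimes, Decimals, and anything else that isn't native JSON. Passing default=str is a crude but effective escape hatch (the payload below is made up):

import json
from datetime import datetime

payload = {"user_id": 123, "created_at": datetime.now()}
# default=str turns anything json doesn't understand into its string form
print(json.dumps(payload, indent=2, default=str))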

When NOT to use print debugging

  • Production code: Use proper logging with levels and handlers, not print statements
  • Race conditions: Printing can change timing enough to hide bugs in concurrent code
  • Large data: Print a summary or sample, not the whole thing. Your terminal has limits
  • Loops: Don't print on every iteration, you'll flood your terminal and slow everything down
  • When you need history: Print statements disappear when your program ends. Use logging for persistent debugging
  • Multi-threaded code: Print output from different threads gets interleaved in unreadable ways (a small mitigation is sketched after this list)
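
If you do end up printing from threads, at least tag each line with the thread name and flush it. A minimal sketch (tprint is a made-up helper):

import threading

def tprint(*args):
    # Prefix with the thread name so interleaved lines are at least attributable
    print(f'[{threading.current_thread().name}]', *args, flush=True)

tprint('processing batch 3')  # e.g. [MainThread] processing batch 3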

Real-world debugging war stories

Let me share a few times print debugging saved me (and one time it didn't).

The disappearing database rows

A user reported that their data was randomly disappearing. I added a print statement right before the delete operation:

def cleanup_old_records():
    cutoff = datetime.now() - timedelta(days=30)
    print(f'XXX: Deleting records older than {cutoff=}')
    deleted_count, _ = Record.objects.filter(created_at__lt=cutoff).delete()
    print(f'XXX: Deleted {deleted_count} records')

Turns out the timezone-naive datetime was being compared to timezone-aware database timestamps. In certain timezones, "30 days ago" was being interpreted as "30 days from now" and we were deleting everything. One print statement showed me the cutoff date was in the future.

The case of the wrong API response

API was returning 500 errors randomly. Added prints to see what was being sent:

def call_api(endpoint, data):
    print(f'XXX: Calling {endpoint=} with {data=}')
    response = requests.post(endpoint, json=data)
    print(f'XXX: Got {response.status_code=}')
    return response

Discovered that sometimes data was a string instead of a dict. The API rightfully rejected invalid JSON. The bug was in how we were building the request data, not in the API call itself.

When print debugging failed me

I was debugging a memory leak. Added prints everywhere. They didn't help. Worse, the debugging code I'd bolted on was holding extra references to objects, making the leak even harder to see!

This is when I learned to use memory profilers and proper debugging tools. Print debugging has limits. Knowing when you've hit those limits is important.

Team debugging etiquette

If you're working on a team, some guidelines:

Don't commit print statements. Use a pre-commit hook to catch them, or just search your diff before committing: git diff | grep DEBUG
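
A pre-commit hook can be as simple as a script that fails the commit when the staged diff adds a debug marker. A rough sketch, assuming the 'DEBUG:' marker convention from earlier (wire it into whatever hook runner your team uses):

import subprocess
import sys

# Only look at lines being added in the staged diff
diff = subprocess.run(
    ['git', 'diff', '--cached', '-U0'],
    capture_output=True, text=True, check=True,
).stdout

leftovers = [line for line in diff.splitlines()
             if line.startswith('+') and not line.startswith('+++') and 'DEBUG:' in line]

if leftovers:
    print('Stray debug prints in the staged changes:')
    print('\n'.join(leftovers))
    sys.exit(1)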

Use logging for shared code. If multiple people work on a module, use logging.debug() instead of print. It's more professional and can be controlled globally.

Clean up after yourself. Nothing is more annoying than pulling changes and seeing someone else's print("HERE") and print("WTF") scattered everywhere.

The bottom line

There's no shame in print debugging. Use the right tool for the job, and sometimes the right tool is the simplest one. Just remember to clean up before you commit.

Print debugging is like duct tape. It's not elegant, it's not sophisticated, but it works. Every senior engineer I know still uses it regularly. The difference between a junior and senior engineer isn't whether they use print statements - it's knowing when to use them and when to reach for something more powerful.

Start with print. If that doesn't work, move to logging. If that doesn't work, break out the debugger. But don't skip straight to the debugger because you think print statements are beneath you. The best debugging tool is the one that finds the bug fastest.

Now go forth and print with confidence.
