Context Managers Are Underrated

Everyone knows with open('file') as f. But context managers are useful for way more than files. Once you understand the pattern, you'll see opportunities to use them everywhere.

I've been writing Python for over a decade, and I still find new uses for context managers regularly. They're one of those language features that seem simple at first, but the more you use them, the more powerful they become. This post is everything I wish someone had told me early on.

What is a context manager?

A context manager is anything that works with Python's with statement. It defines what happens when you enter the with block and what happens when you exit (whether normally or due to an exception).

with something() as x:
    # setup already ran in __enter__ when we got here
    do_stuff(x)
# cleanup runs here in __exit__, even if an exception occurred
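
Under the hood, the with statement expands into roughly this (a simplified sketch of the protocol from PEP 343):

mgr = something()
x = mgr.__enter__()                    # setup
try:
    do_stuff(x)                        # the body of the with block
except BaseException as exc:
    # __exit__ receives the exception info; a truthy return suppresses it
    if not mgr.__exit__(type(exc), exc, exc.__traceback__):
        raise
else:
    mgr.__exit__(None, None, None)     # normal exit: no exception info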

The classic example is file handling:

# Without context manager - easy to forget close()
f = open('file.txt')
data = f.read()
f.close()  # What if an exception happens before this?

# With context manager - close() is guaranteed
with open('file.txt') as f:
    data = f.read()
# File is closed here, even if an exception occurred

This might seem like a minor convenience, but it's solving a real problem. In older codebases, you'll find leaky file handles everywhere because someone forgot to call close() or an exception happened between open() and close(). Context managers eliminate this entire class of bugs.

But context managers can do so much more.

Creating your own: the @contextmanager decorator

The easiest way to create a context manager is with contextlib.contextmanager:

from contextlib import contextmanager
import time

@contextmanager
def timer(name):
    start = time.time()
    yield  # This is where the 'with' block runs
    elapsed = time.time() - start
    print(f'{name} took {elapsed:.2f}s')

with timer('database query'):
    results = db.execute('SELECT * FROM users')
# Output: database query took 0.15s

The yield is the key. Code before yield runs on entry, code after runs on exit.
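
Whatever you yield is bound to the as target; that's how a context manager hands a resource to the block. A toy sketch:

@contextmanager
def managed_list():
    items = []     # setup: create the resource
    yield items    # the yielded value is what 'as' receives
    print(f'collected {len(items)} items')  # teardown

with managed_list() as items:
    items.append(1)
    items.append(2)
# Output: collected 2 items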

Handling exceptions properly

If your cleanup must run even when an exception occurs, use try/finally. (Note that the timer above lacks this: if the with block raises, the elapsed time never prints.)

import os

@contextmanager
def cd(path):
    '''Temporarily change to a directory.'''
    old_dir = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(old_dir)  # Always runs, even on exception

with cd('/tmp'):
    # We're in /tmp here
    do_something()
# Back to original directory

Real examples I use constantly

Timing code blocks:

@contextmanager
def timer(name):
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    print(f'{name}: {elapsed:.3f}s')

# Use it anywhere you want to measure performance
with timer('data processing'):
    df = process_large_dataset(data)
# Output: data processing: 2.347s

This is incredibly useful for quick performance debugging. I'll often sprinkle these throughout code when investigating slowdowns, then remove them once I've found the bottleneck.

Temporary environment variables:

@contextmanager
def temp_env(**kwargs):
    '''Temporarily set environment variables.'''
    old = {k: os.environ.get(k) for k in kwargs}
    os.environ.update(kwargs)
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = v

with temp_env(DEBUG='true', API_KEY='test'):
    run_tests()

This is a lifesaver for testing. No more accidentally leaving test credentials in your environment or having tests affect each other through shared state.

Suppressing specific exceptions:

from contextlib import suppress

# Instead of:
try:
    os.remove('file.txt')
except FileNotFoundError:
    pass

# Do:
with suppress(FileNotFoundError):
    os.remove('file.txt')

Use this sparingly. Suppressing exceptions should be intentional and specific. But when you really do want to ignore certain errors, this is much cleaner than an empty except block.

Redirecting stdout:

from contextlib import redirect_stdout
from io import StringIO

output = StringIO()
with redirect_stdout(output):
    print('This goes to the StringIO')

captured = output.getvalue()

This is perfect for testing functions that print output, or for capturing verbose library output that you can't control.
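
For instance, a test for a function that prints (greet and test_greet_output are hypothetical names for illustration):

def greet(name):
    print(f'hello, {name}')

def test_greet_output():
    buf = StringIO()
    with redirect_stdout(buf):
        greet('world')
    assert buf.getvalue() == 'hello, world\n'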

More practical patterns

Atomic file writes:

import json
import os
import tempfile
from contextlib import contextmanager

@contextmanager
def atomic_write(filepath, mode='w'):
    '''Write to a temp file, then move to the final location on success.'''
    dirname = os.path.dirname(os.path.abspath(filepath))
    temp_file = tempfile.NamedTemporaryFile(
        mode=mode,
        dir=dirname,       # same filesystem, so os.replace is atomic
        delete=False,
    )
    try:
        yield temp_file
        temp_file.flush()
        os.fsync(temp_file.fileno())
        temp_file.close()  # close before replacing (required on Windows)
        os.replace(temp_file.name, filepath)
    except BaseException:
        temp_file.close()
        os.unlink(temp_file.name)
        raise

# If anything fails, original file is unchanged
with atomic_write('config.json') as f:
    json.dump(config, f)

This pattern is crucial for important files. If your process crashes mid-write, you don't end up with a corrupted file. The original stays intact until the new version is completely written.

Database transaction pattern:

@contextmanager
def transaction(db):
    '''Automatic commit/rollback on success/failure.'''
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()
        raise

with transaction(database) as db:
    db.execute('INSERT INTO users VALUES (?)', (user,))
    db.execute('INSERT INTO profiles VALUES (?)', (profile,))
# Both inserts succeed or both roll back - no partial state

This ensures your database operations are atomic. Either everything succeeds or nothing does. No more forgetting to rollback on error.
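
The standard library's sqlite3 module builds this in: a connection used as a context manager commits on success and rolls back on exception (it does not close the connection for you):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT)')

with conn:  # commit on normal exit, rollback if the block raises
    conn.execute('INSERT INTO users VALUES (?)', ('alice',))

conn.close()  # closing is still your job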

Temporary attribute changes:

@contextmanager
def temp_attr(obj, attr, value):
    '''Temporarily change an object's attribute.'''
    old_value = getattr(obj, attr)
    setattr(obj, attr, value)
    try:
        yield obj
    finally:
        setattr(obj, attr, old_value)

# Useful for testing
with temp_attr(logger, 'level', logging.DEBUG):
    # This section has debug logging
    process_data()
# Back to original log level

Timing with threshold alerts:

@contextmanager
def timer_alert(name, threshold_seconds=1.0):
    '''Time code and alert if it exceeds threshold.'''
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    if elapsed > threshold_seconds:
        logger.warning(f'{name} took {elapsed:.2f}s (threshold: {threshold_seconds}s)')

with timer_alert('API call', threshold_seconds=0.5):
    response = requests.get(url)

This helped me catch performance regressions early. Set thresholds for critical operations and get alerted when they slow down.

Built-in context managers you should know

Python's standard library is full of useful context managers:

tempfile.TemporaryDirectory:

with tempfile.TemporaryDirectory() as tmpdir:
    # Create files in tmpdir
    with open(f'{tmpdir}/data.txt', 'w') as f:
        f.write('temp data')
# Directory and all contents are automatically deleted

threading.Lock:

lock = threading.Lock()

with lock:
    # Only one thread can be here at a time
    shared_resource.update()

decimal.localcontext:

from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 50  # High precision for this block only
    result = Decimal('1') / Decimal('7')

unittest.mock.patch:

from unittest.mock import patch

with patch('module.function', return_value=42):
    result = code_that_calls_function()
    assert result == 42
# Original function is restored

The class-based approach

For more complex context managers, implement __enter__ and __exit__:

class DatabaseConnection:
    def __init__(self, connection_string):
        self.connection_string = connection_string
        self.conn = None

    def __enter__(self):
        self.conn = create_connection(self.connection_string)
        return self.conn

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None:
            self.conn.rollback()
        else:
            self.conn.commit()
        self.conn.close()
        return False  # Don't suppress exceptions

with DatabaseConnection('postgres://...') as conn:
    conn.execute('INSERT INTO users ...')
# Commits on success, rolls back on exception, always closes

The __exit__ method receives exception info if one occurred. Return True to suppress the exception, False to let it propagate.
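
Conditionally returning True is how targeted suppression works; this sketch is essentially what contextlib.suppress does under the hood:

import os

class IgnoreMissing:
    '''Suppress only FileNotFoundError; everything else propagates.'''
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # True means 'handled'; any other exception type falls through
        return exc_type is not None and issubclass(exc_type, FileNotFoundError)

with IgnoreMissing():
    os.remove('maybe-missing.txt')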

Use the class-based approach when:
- You need to maintain state across entry and exit
- Your context manager is reusable with different configurations
- The setup logic is complex and benefits from being in __init__
- You want to expose additional methods beyond just the context manager protocol

Here's a more complete example showing these benefits:

class ProgressTracker:
    def __init__(self, total, desc='Progress'):
        self.total = total
        self.desc = desc
        self.current = 0
        self.start_time = None

    def __enter__(self):
        self.start_time = time.time()
        self.update(0)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        elapsed = time.time() - self.start_time
        print(f'\n{self.desc} complete in {elapsed:.2f}s')
        return False

    def update(self, amount=1):
        '''Update progress - can be called from within the with block.'''
        self.current += amount
        percent = (self.current / self.total) * 100
        print(f'\r{self.desc}: {percent:.1f}%', end='', flush=True)

with ProgressTracker(total=100, desc='Processing items') as progress:
    for item in items:
        process(item)
        progress.update()

The extra methods like update() make class-based context managers more flexible than simple decorators.

Combining multiple context managers

You can nest them:

with open('input.txt') as infile:
    with open('output.txt', 'w') as outfile:
        outfile.write(infile.read())

Or combine on one line:

with open('input.txt') as infile, open('output.txt', 'w') as outfile:
    outfile.write(infile.read())

Or, on Python 3.10 and newer, use parentheses for many:

with (
    open('file1.txt') as f1,
    open('file2.txt') as f2,
    open('file3.txt') as f3,
):
    process(f1, f2, f3)

Common gotchas and how to avoid them

Gotcha 1: Forgetting the try/finally in @contextmanager

# BAD - cleanup won't run if exception occurs
@contextmanager
def bad_lock():
    lock.acquire()
    yield
    lock.release()  # Never runs if exception occurs

# GOOD - cleanup always runs
@contextmanager
def good_lock():
    lock.acquire()
    try:
        yield
    finally:
        lock.release()  # Always runs

Gotcha 2: Yielding more than once

# BAD - will raise RuntimeError
@contextmanager
def broken():
    yield 1
    yield 2  # Can't do this!

# The @contextmanager decorator expects exactly one yield

Gotcha 3: Accidentally suppressing exceptions in __exit__

class SilentlyBroken:
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.cleanup()
        return True  # BAD - suppresses ALL exceptions!

# A falsy return (including the implicit None) lets exceptions
# propagate, so be explicit:
class Correct:
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.cleanup()
        return False  # Explicitly don't suppress exceptions

Gotcha 4: Exceptions during cleanup

@contextmanager
def risky_cleanup():
    resource = acquire()
    try:
        yield resource
    finally:
        # What if this raises an exception?
        resource.cleanup()  # Could hide the original exception

# Better approach
@contextmanager
def safe_cleanup():
    resource = acquire()
    try:
        yield resource
    finally:
        try:
            resource.cleanup()
        except Exception as e:
            logger.error(f'Cleanup failed: {e}')
            # Original exception still propagates

Gotcha 5: Using the wrong order with multiple context managers

# The order matters!
# They clean up in reverse order: the second closes, then the first
with open('input.txt') as infile, open('output.txt', 'w') as outfile:
    outfile.write(infile.read())

# If you need specific cleanup order, be explicit about it
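
A quick way to see the LIFO cleanup order (noisy is a throwaway helper for illustration):

@contextmanager
def noisy(name):
    print(f'enter {name}')
    try:
        yield
    finally:
        print(f'exit {name}')

with noisy('first'), noisy('second'):
    pass
# Output: enter first, enter second, exit second, exit first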

Advanced patterns

Reentrant context managers:

from threading import RLock

class ReentrantResource:
    def __init__(self):
        self._lock = RLock()  # RLock allows same thread to acquire multiple times

    def __enter__(self):
        self._lock.acquire()
        return self

    def __exit__(self, *args):
        self._lock.release()
        return False

# Can be used nested in same thread
resource = ReentrantResource()
with resource:
    with resource:  # This works with RLock
        do_stuff()

Context managers that return different values:

@contextmanager
def smart_open(filepath):
    '''Return different things based on file type.'''
    if filepath.endswith('.gz'):
        import gzip
        with gzip.open(filepath, 'rt') as f:
            yield f
    else:
        with open(filepath) as f:
            yield f

# Works transparently with compressed or uncompressed files
with smart_open('data.txt.gz') as f:
    data = f.read()

Chaining context managers programmatically:

from contextlib import ExitStack

def process_files(filenames):
    '''Open multiple files without deep nesting.'''
    with ExitStack() as stack:
        files = [stack.enter_context(open(fname)) for fname in filenames]
        # All files are open here
        for f in files:
            process(f)
    # All files are closed here

# ExitStack is incredibly powerful for dynamic numbers of resources
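
ExitStack can also register plain functions as cleanup callbacks, which run LIFO on exit just like nested with blocks (acquire, release, and use here are hypothetical stand-ins):

from contextlib import ExitStack

with ExitStack() as stack:
    resource = acquire()              # hypothetical resource
    stack.callback(resource.release)  # registered cleanup
    temp = acquire()
    stack.callback(temp.release)      # runs first on exit (LIFO)
    use(resource, temp)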

Creating context managers from callables:

from contextlib import closing
from urllib.request import urlopen

# For objects with a close() method but no context manager support
with closing(urlopen('http://example.com')) as page:
    data = page.read()
# page.close() is called automatically

Real-world web scraping example

Here's a complete example showing multiple patterns together:

import requests
import time
from contextlib import contextmanager

@contextmanager
def rate_limited_session(requests_per_second=1):
    '''Create a session with automatic rate limiting.'''
    session = requests.Session()
    last_request_time = [0.0]  # one-element list so the closure can mutate it

    original_request = session.request
    def rate_limited_request(*args, **kwargs):
        elapsed = time.time() - last_request_time[0]
        sleep_time = (1.0 / requests_per_second) - elapsed
        if sleep_time > 0:
            time.sleep(sleep_time)

        response = original_request(*args, **kwargs)
        last_request_time[0] = time.time()
        return response

    session.request = rate_limited_request

    try:
        yield session
    finally:
        session.close()

# Use it
with rate_limited_session(requests_per_second=2) as session:
    for url in urls:
        response = session.get(url)
        # Automatically rate limited to 2 req/sec
        process(response)

This pattern encapsulates complex behavior (rate limiting, session management) into a clean, reusable context manager.

Testing with context managers

Context managers make testing much cleaner:

@contextmanager
def mock_environment():
    '''Set up complete test environment.'''
    # Setup
    db = create_test_database()
    cache = create_test_cache()
    old_env = os.environ.copy()
    os.environ.update({'TESTING': 'true', 'DB_URL': db.url})

    try:
        yield {'db': db, 'cache': cache}
    finally:
        # Cleanup happens even if test fails
        db.cleanup()
        cache.cleanup()
        os.environ.clear()
        os.environ.update(old_env)

def test_user_creation():
    with mock_environment() as env:
        user = create_user('test@example.com')
        assert env['db'].get_user(user.id) == user
    # Everything cleaned up automatically

Performance considerations

Context managers have minimal overhead. The with statement is compiled to efficient bytecode. Don't avoid them for performance reasons.

However, be aware of what work you're doing in setup/teardown:

# BAD - expensive setup on every iteration
for item in items:
    with DatabaseConnection() as conn:  # Connects every iteration!
        conn.insert(item)

# GOOD - setup once
with DatabaseConnection() as conn:
    for item in items:
        conn.insert(item)  # Reuse connection

When to use context managers

Any time you have:

  • Setup and teardown: Acquire a resource, use it, release it
  • Before and after: Change state, do something, restore state
  • Error handling: Ensure cleanup runs even on exceptions
  • Temporary state: Change something temporarily, then restore it
  • Resource management: Files, locks, connections, transactions
  • Timing and monitoring: Measure duration, log entry/exit
  • Mocking and testing: Set up test state, guarantee cleanup

If you find yourself writing try/finally, consider whether a context manager would be cleaner. If you're doing the same setup/teardown in multiple places, definitely extract it to a context manager.
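
For example, a try/finally that pins the random seed for reproducible tests collapses into a reusable context manager (a minimal sketch):

import random
from contextlib import contextmanager

@contextmanager
def seeded(seed):
    state = random.getstate()
    random.seed(seed)
    try:
        yield
    finally:
        random.setstate(state)  # restore global RNG state even on error

with seeded(42):
    sample = random.random()  # reproducible inside the block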

Quick reference: decorator vs class

Use @contextmanager when:
- Simple setup/teardown logic
- Don't need to maintain instance state
- Just need it for one-off utility

Use class-based (__enter__/__exit__) when:
- Complex state management
- Want additional methods
- Creating reusable library code
- Need fine control over exception handling

The bottom line

Context managers are one of Python's best features. They make resource management automatic and error-proof. Once you start thinking in terms of 'enter this context, do stuff, exit', you'll find uses everywhere.

The @contextmanager decorator makes them trivial to create. Start small: next time you write a try/finally, ask yourself if a context manager would make it clearer. Then watch as you find more and more places to use them.

They're not just for files. They're for any time you need to guarantee cleanup, temporarily change state, or measure what's happening in a block of code. Master this pattern and your Python code will become more robust, more readable, and more maintainable.
