Best Practice Recipes

Production-ready code patterns for every connection type, plus app-level design patterns.

What pgmonkey Does Behind the Scenes

Before diving into the recipes, here is what pgmonkey handles automatically so you do not have to:

- Connection and pool caching: the same config file and connection type return the same cached instance, preventing pool storms.
- Transaction boundaries: context managers commit on success and roll back on exception.
- Pool lifecycle: pool creation, connection borrow/return, and atexit cleanup of all cached connections and pools.

Connection Recipes

Normal (Synchronous) Connection

Best for: scripts, CLI tools, simple sync apps. One connection, auto-managed.

from pgmonkey import PGConnectionManager

manager = PGConnectionManager()
connection = manager.get_database_connection('config.yaml', 'normal')

# Context manager commits on success, rolls back on exception
with connection as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT version();')
        print(cur.fetchone())

# Connection is cached - calling again returns the same instance
same_conn = manager.get_database_connection('config.yaml', 'normal')
assert same_conn is connection

pgmonkey handles: Caching (same config = same connection), atexit cleanup.

Pooled (Synchronous) Connection

Best for: Flask, Django, multi-threaded web apps. Borrows/returns connections from a shared pool.

from pgmonkey import PGConnectionManager

manager = PGConnectionManager()
pool = manager.get_database_connection('config.yaml', 'pool')

# Each 'with' block borrows a connection, returns it when done
with pool as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT count(*) FROM users;')
        print(cur.fetchone())

# Pool stays open - next 'with' block borrows another connection
with pool as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT now();')
        print(cur.fetchone())

# Calling get_database_connection again returns the SAME pool (cached)
same_pool = manager.get_database_connection('config.yaml', 'pool')
assert same_pool is pool

pgmonkey handles: Pool creation, caching (prevents pool storms), atexit cleanup.

Async Connection

Best for: single async tasks, lightweight async scripts. One async connection.

import asyncio
from pgmonkey import PGConnectionManager

async def main():
    manager = PGConnectionManager()
    connection = await manager.get_database_connection('config.yaml', 'async')

    async with connection as conn:
        async with conn.cursor() as cur:
            await cur.execute('SELECT version();')
            print(await cur.fetchone())

    # Connection is cached - awaiting again returns the same instance
    same_conn = await manager.get_database_connection('config.yaml', 'async')
    assert same_conn is connection

if __name__ == "__main__":
    asyncio.run(main())

pgmonkey handles: Caching, atexit cleanup (async connections closed via temporary event loop).
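The "temporary event loop" cleanup can be pictured with a generic sketch. This is not pgmonkey's actual code, and `FakeAsyncConnection` is a stand-in: the point is that at interpreter exit no event loop is running, so an atexit hook spins up a fresh loop just long enough to await each connection's close coroutine.

```python
import asyncio
import atexit

class FakeAsyncConnection:
    """Stand-in for an async connection exposing a close() coroutine."""
    def __init__(self):
        self.closed = False

    async def close(self):
        self.closed = True

def close_all_at_exit(connections):
    """Run async close() calls from a sync atexit context via a temporary loop."""
    loop = asyncio.new_event_loop()  # no loop is running at interpreter exit
    try:
        for conn in connections:
            loop.run_until_complete(conn.close())
    finally:
        loop.close()

conns = [FakeAsyncConnection(), FakeAsyncConnection()]
atexit.register(close_all_at_exit, conns)
```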

Async Pooled Connection

Best for: FastAPI, aiohttp, high-concurrency async web apps. Borrows/returns connections from an async pool.

import asyncio
from pgmonkey import PGConnectionManager

async def main():
    manager = PGConnectionManager()
    pool = await manager.get_database_connection('config.yaml', 'async_pool')

    # Each 'async with' borrows a connection, returns it on exit
    async with pool as conn:
        async with conn.cursor() as cur:
            await cur.execute('SELECT count(*) FROM orders;')
            print(await cur.fetchone())

    # Pool stays open - reuse it
    async with pool as conn:
        async with conn.cursor() as cur:
            await cur.execute('SELECT now();')
            print(await cur.fetchone())

    # Pool is cached - awaiting again returns the SAME pool
    same_pool = await manager.get_database_connection('config.yaml', 'async_pool')
    assert same_pool is pool

    # Clean up when done (or let atexit handle it)
    await manager.clear_cache_async()

if __name__ == "__main__":
    asyncio.run(main())

pgmonkey handles: Pool creation, connection borrow/return, auto commit/rollback, caching (prevents pool storms), atexit cleanup.

App-Level Design Patterns

These minimal examples show how to integrate pgmonkey into real applications.

Flask / Sync Web App

Database Class for Flask

A reusable database class for synchronous web frameworks. The pool is created once and shared across all requests.

from pgmonkey import PGConnectionManager

class Database:
    def __init__(self, config_path):
        self.manager = PGConnectionManager()
        self.config_path = config_path
        # Pool is created on first call, cached thereafter
        self.pool = self.manager.get_database_connection(config_path, 'pool')

    def fetch_one(self, query, params=None):
        with self.pool as conn:
            with conn.cursor() as cur:
                cur.execute(query, params)
                return cur.fetchone()

    def fetch_all(self, query, params=None):
        with self.pool as conn:
            with conn.cursor() as cur:
                cur.execute(query, params)
                return cur.fetchall()

    def execute(self, query, params=None):
        with self.pool as conn:
            with conn.cursor() as cur:
                cur.execute(query, params)

# Usage in Flask
from flask import Flask

app = Flask(__name__)
db = Database('/path/to/config.yaml')

@app.route('/users')
def list_users():
    rows = db.fetch_all('SELECT id, name FROM users ORDER BY id;')
    return {'users': [{'id': r[0], 'name': r[1]} for r in rows]}

FastAPI / Async Web App

AsyncDatabase Class for FastAPI

A reusable async database class for async web frameworks. The pool is created once at startup and shared across all requests.

import asyncio
from pgmonkey import PGConnectionManager

class AsyncDatabase:
    def __init__(self, config_path):
        self.manager = PGConnectionManager()
        self.config_path = config_path
        self.pool = None

    async def connect(self):
        # Pool is created and cached on first call
        self.pool = await self.manager.get_database_connection(
            self.config_path, 'async_pool'
        )

    async def disconnect(self):
        await self.manager.clear_cache_async()

    async def fetch_one(self, query, params=None):
        async with self.pool as conn:
            async with conn.cursor() as cur:
                await cur.execute(query, params)
                return await cur.fetchone()

    async def fetch_all(self, query, params=None):
        async with self.pool as conn:
            async with conn.cursor() as cur:
                await cur.execute(query, params)
                return await cur.fetchall()

    async def execute(self, query, params=None):
        async with self.pool as conn:
            async with conn.cursor() as cur:
                await cur.execute(query, params)

# Usage in FastAPI
from fastapi import FastAPI

app = FastAPI()
db = AsyncDatabase('/path/to/config.yaml')

# Note: newer FastAPI releases deprecate on_event in favor of lifespan
# handlers, but it still works for this pattern.
@app.on_event("startup")
async def startup():
    await db.connect()

@app.on_event("shutdown")
async def shutdown():
    await db.disconnect()

@app.get("/orders")
async def list_orders():
    rows = await db.fetch_all('SELECT id, total FROM orders ORDER BY id;')
    return {"orders": [{"id": r[0], "total": r[1]} for r in rows]}

Environment Variable Interpolation Patterns

pgmonkey supports opt-in environment variable interpolation. Enable it with resolve_env=True in Python or --resolve-env on the CLI. Without it, configs work exactly as before.

Local Development with Environment Variables

Best for: keeping passwords out of config files during local development. Set env vars in your shell or with a .env loader of your choice.

# config.yaml
connection_type: 'normal'

connection_settings:
  user: '${PGUSER:-postgres}'
  password: '${PGPASSWORD}'
  host: '${PGHOST:-localhost}'
  port: '${PGPORT:-5432}'
  dbname: '${PGDATABASE:-mydb}'

# In your shell:
export PGPASSWORD=dev_password

# Test the connection
pgmonkey pgconfig test --connconfig config.yaml --resolve-env

# In Python:
from pgmonkey import PGConnectionManager

manager = PGConnectionManager()
conn = manager.get_database_connection('config.yaml', resolve_env=True)

Tip: Non-sensitive values like host use defaults (${PGHOST:-localhost}) so the config works out of the box on a dev machine. Sensitive values like password have no default and will fail with a clear error if the env var is missing.

Docker / Docker Compose

Best for: containerized deployments where secrets are passed as environment variables via docker run -e or a docker-compose.yml file.

# config.yaml - committed to your repo (no secrets)
connection_type: 'pool'

connection_settings:
  user: '${PGUSER:-postgres}'
  password: '${PGPASSWORD}'
  host: '${PGHOST:-db}'
  port: '${PGPORT:-5432}'
  dbname: '${PGDATABASE:-appdb}'

pool_settings:
  min_size: 5
  max_size: 20

# docker-compose.yml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: appdb

  app:
    build: .
    depends_on: [db]
    environment:
      PGPASSWORD: ${DB_PASSWORD}
      PGHOST: db
      PGDATABASE: appdb

# app.py
from pgmonkey import PGConnectionManager

manager = PGConnectionManager()
pool = manager.get_database_connection('config.yaml', 'pool', resolve_env=True)

with pool as conn:
    with conn.cursor() as cur:
        cur.execute('SELECT version();')
        print(cur.fetchone())

# Run with a .env file or inline:
DB_PASSWORD=secretpass docker compose up

How it works: Docker Compose injects PGPASSWORD and PGHOST into the app container. pgmonkey resolves them at startup via resolve_env=True. The config.yaml is safe to commit - it contains no secrets, only ${VAR} references.

Kubernetes Secrets with from_file

Best for: production Kubernetes deployments. Mount secrets as files and let pgmonkey read them at startup.

# config.yaml
connection_type: 'async_pool'

connection_settings:
  user: '${PGUSER:-appuser}'
  password:
    from_file: /var/run/secrets/db/password
  host: '${PGHOST:-db-service}'
  port: '${PGPORT:-5432}'
  dbname: '${PGDATABASE:-appdb}'
  sslmode: 'require'

# Python (FastAPI example):
from pgmonkey import PGConnectionManager

manager = PGConnectionManager()

async def startup():
    pool = await manager.get_database_connection(
        'config.yaml', 'async_pool', resolve_env=True
    )

How from_file works: pgmonkey reads the file and trims the trailing newline, matching the Kubernetes Secret convention. If the file is missing, EnvInterpolationError is raised immediately at startup - not on the first query.
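As an illustration of those semantics (a hypothetical re-implementation, not pgmonkey's source; the function and error names are stand-ins), a from_file lookup amounts to reading the file, trimming one trailing newline, and failing loudly when the file is absent:

```python
from pathlib import Path

class EnvInterpolationError(Exception):
    """Stand-in for pgmonkey's interpolation error type."""

def read_secret_file(path):
    """Read a mounted secret, trimming the trailing newline K8s adds."""
    p = Path(path)
    if not p.is_file():
        raise EnvInterpolationError(f"secret file not found: {path}")
    text = p.read_text()
    # Trim exactly one trailing newline, per the Kubernetes Secret
    # convention; any other whitespace inside the secret is preserved.
    return text[:-1] if text.endswith("\n") else text
```

Because the check happens while the config is being resolved, a bad mount path surfaces as a startup error rather than a confusing authentication failure on the first query.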

Redacting Secrets for Logging

Best for: logging resolved configs without leaking credentials.

from pgmonkey import load_config, redact_config

cfg = load_config('config.yaml', resolve_env=True)

# Safe to log - sensitive values are masked
print(redact_config(cfg))
# {'connection_settings': {'password': '***REDACTED***', 'host': 'db.prod.com', ...}}

What gets redacted: password, sslkey, sslcert, sslrootcert, and any key containing token, secret, or credential. Empty strings and None are left as-is.
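That rule can be approximated like this (a sketch of the behavior described above, not pgmonkey's source):

```python
SENSITIVE_KEYS = {"password", "sslkey", "sslcert", "sslrootcert"}
SENSITIVE_SUBSTRINGS = ("token", "secret", "credential")

def _is_sensitive(key):
    k = key.lower()
    return k in SENSITIVE_KEYS or any(s in k for s in SENSITIVE_SUBSTRINGS)

def redact(cfg):
    """Recursively mask sensitive values; empty strings and None pass through."""
    if isinstance(cfg, dict):
        return {k: ("***REDACTED***"
                    if _is_sensitive(k) and v not in ("", None)
                    else redact(v))
                for k, v in cfg.items()}
    if isinstance(cfg, list):
        return [redact(v) for v in cfg]
    return cfg
```

Leaving empty strings and None untouched means the redacted output still tells you which secrets were never set at all, which is exactly what you want when debugging a misconfigured deployment.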

Testing Configs with Environment Interpolation from the Command Line

The pgconfig test and pgconfig generate-code subcommands both accept --resolve-env to resolve ${VAR} references before connecting.

# Test a config that uses ${VAR} references
pgmonkey pgconfig test --connconfig config.yaml --resolve-env

# Override the connection type for the test
pgmonkey pgconfig test --connconfig config.yaml --resolve-env --connection-type pool

# Allow ${VAR:-default} on sensitive keys (e.g. password) - local dev convenience
pgmonkey pgconfig test --connconfig config.yaml --resolve-env --allow-sensitive-defaults

# Generate example code with env vars resolved
pgmonkey pgconfig generate-code --connconfig config.yaml --resolve-env

How --resolve-env works: Without this flag, ${PGHOST:-localhost} is passed as a literal string to psycopg (which will fail). With it, pgmonkey resolves all ${VAR} patterns and from_env/from_file references before connecting.
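The ${VAR} and ${VAR:-default} forms follow shell conventions. A rough regex-based sketch of the resolution rule (illustrative only, not pgmonkey's parser):

```python
import os
import re

_PATTERN = re.compile(
    r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}"
)

def resolve(value, env=os.environ):
    """Replace ${VAR} / ${VAR:-default} references in a string."""
    def _sub(m):
        name, default = m.group("name"), m.group("default")
        if name in env:
            return env[name]          # env var wins over any default
        if default is not None:
            return default            # ${VAR:-default} fallback
        raise KeyError(f"environment variable {name} is not set and has no default")
    return _PATTERN.sub(_sub, value)
```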

When to use --allow-sensitive-defaults: By default, pgmonkey blocks ${PGPASSWORD:-devpass} because a fallback password in a config file is dangerous in production. If you genuinely want defaults on sensitive keys (e.g. for a local dev config), pass --allow-sensitive-defaults to opt in.
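The guard can be pictured as a simple pre-flight check (a hypothetical sketch, not pgmonkey's implementation): before resolving, reject any sensitive key whose reference carries a :-default unless the override flag is set.

```python
import re

SENSITIVE = {"password", "sslkey", "sslcert", "sslrootcert"}

def check_sensitive_defaults(settings, allow_sensitive_defaults=False):
    """Reject ${VAR:-default} on sensitive keys unless explicitly allowed."""
    for key, value in settings.items():
        if key.lower() in SENSITIVE and isinstance(value, str):
            if (re.search(r"\$\{[A-Za-z_][A-Za-z0-9_]*:-", value)
                    and not allow_sensitive_defaults):
                raise ValueError(
                    f"default value on sensitive key '{key}' - "
                    "pass --allow-sensitive-defaults to permit this"
                )
```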

Quick Reference

| Connection Type | Best For | Cached? | Context Manager | Pool Reusable? |
| --- | --- | --- | --- | --- |
| normal | Scripts, CLI tools | Yes | with conn: | N/A |
| pool | Flask, Django, threaded apps | Yes | with pool: borrows/returns | Yes |
| async | Async scripts, lightweight tasks | Yes | async with conn: | N/A |
| async_pool | FastAPI, aiohttp, high concurrency | Yes | async with pool: borrows/returns | Yes |

Cache Management API

pgmonkey exposes cache management methods on PGConnectionManager:

| Method | Description |
| --- | --- |
| manager.cache_info | Returns a dict with the size and connection_types of cached connections. |
| manager.clear_cache() | Disconnects all cached connections and clears the cache. Use from sync code. |
| await manager.clear_cache_async() | Same as above, but safe to call from async code (inside an event loop). |
| force_reload=True | Pass to get_database_connection() to replace a cached connection with a fresh one. |

Note on caching with resolve_env=True: The cache key is computed from the resolved config. If an environment variable changes between calls, the new value produces a different cache key and a new connection is created. The old connection remains cached until you call clear_cache() or the process exits. In normal deployments this is a non-issue since environment variables do not change mid-process.
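The key derivation can be illustrated with a generic sketch (pgmonkey's actual scheme may differ): hash the resolved config, so a changed environment variable yields a different key and therefore a fresh connection rather than a stale cached one.

```python
import hashlib
import json

def cache_key(resolved_config):
    """Derive a stable cache key from a resolved config dict."""
    # sort_keys gives a canonical serialization, so key order in the
    # config does not affect the hash
    canonical = json.dumps(resolved_config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Same resolved values -> same key; any resolved value changes -> new key.
key_a = cache_key({"host": "db", "password": "one"})
key_b = cache_key({"host": "db", "password": "two"})
```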