Engineering · 6 min read

Optimizing PostgreSQL Performance: Indexes, Query Planning, and Connection Pooling

Finding the Slow Queries First

Never optimize blind. Enable pg_stat_statements to track query performance across your entire database. This extension records execution time, call count, and row counts for every query. Run it for a week in production, then query it to find your worst offenders:
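The extension must be loaded at server start and then created in the database. A minimal setup sketch (the `shared_preload_libraries` change requires a restart):

```sql
-- In postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

-- Then, in the database you want to track:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```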

SELECT
  query,
  calls,
  mean_exec_time,
  total_exec_time,
  rows::numeric / calls AS avg_rows  -- cast avoids integer truncation
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

Reading EXPLAIN ANALYZE

For each slow query, run EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT). The output shows exactly what the planner decided to do and how long each step took. Look for: a Sequential Scan on a large table (likely needs an index), a Nested Loop join processing far more rows than expected (may need a different join strategy), and a high ratio of buffer reads to buffer hits (data is coming from disk rather than the cache). Comparing actual row counts against the planner's estimates tells you whether table statistics are stale — run ANALYZE tablename to refresh them.
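As a concrete example, here is what that looks like for a hypothetical query against an orders table (table and column names are illustrative):

```sql
-- Fetch a user's most recent orders, with timing and buffer stats
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT *
FROM orders
WHERE user_id = 42
ORDER BY created_at DESC
LIMIT 10;
```

In the resulting plan, a line like `Seq Scan on orders (actual time=… rows=…)` on a large table is the signal to add an index; an `Index Scan` using an index on `(user_id, created_at)` is the outcome you want.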

Index Strategy

Create indexes for the columns in your WHERE clauses, JOIN conditions, and ORDER BY clauses. But indexes have a write cost — every INSERT, UPDATE, and DELETE must update all indexes on the table. More indexes are not always better. The right indexes are the ones that eliminate sequential scans on large tables for your most frequent queries.

Use partial indexes for queries that always filter on a specific condition:

-- Only index active users — much smaller, faster for active-user queries
CREATE INDEX idx_users_active_email ON users(email) WHERE deleted_at IS NULL;

-- Composite index for a common filter + sort pattern
CREATE INDEX idx_orders_user_created ON orders(user_id, created_at DESC);
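Because every index adds write cost, it is worth periodically checking which ones actually get used. A quick sketch using PostgreSQL's built-in statistics view (counters accumulate since the last stats reset, so give it time in production before trusting a zero):

```sql
-- Indexes that have never been scanned — candidates for removal
SELECT relname AS table_name,
       indexrelname AS index_name,
       idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname;
```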

Connection Pooling with PgBouncer

PostgreSQL creates a new backend process for each connection, which is expensive. A Next.js app with 10 serverless function instances each holding 5 connections means 50 database connections — and serverless functions can scale to hundreds of instances. PgBouncer sits between your app and PostgreSQL, maintaining a small pool of real connections and multiplexing many application connections through them. Use transaction pooling mode for serverless deployments: each transaction borrows a real connection from the pool for its duration, then releases it.
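A minimal pgbouncer.ini sketch for this setup — hostnames, database names, and pool sizes are placeholders to tune for your workload:

```ini
[databases]
; Route the logical name "appdb" to the real PostgreSQL server
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; Transaction pooling: a real connection is held only per transaction
pool_mode = transaction
; 20 real connections can serve up to 1000 client connections
default_pool_size = 20
max_client_conn = 1000
```

Your application then connects to port 6432 instead of 5432; note that transaction pooling is incompatible with session-level features like prepared statements held across transactions.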

Slow Query Patterns to Avoid

N+1 queries are the most common performance issue in ORM-based applications. If you load 100 users and then query each user's orders separately, that is 101 queries instead of 1. Use Prisma's include or raw SQL with JOINs to fetch related data in one query. Use SELECT with only the columns you need — SELECT * on a wide table wastes memory and network bandwidth for every row returned.
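The JOIN version of that example can be sketched in plain SQL (column list is illustrative — select only what you need):

```sql
-- One query instead of 101: users and their orders in a single pass
SELECT u.id,
       u.email,
       o.id AS order_id,
       o.created_at
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.deleted_at IS NULL;
```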

Database performance is often the fastest win in a slow application. Reach out if you are dealing with PostgreSQL performance issues in production.

PostgreSQL · Performance · Database · Backend
