Improving Postgres query performance with EXPLAIN ANALYZE
Database queries slower than you'd like? With Postgres' built-in EXPLAIN ANALYZE capability you can identify what's making a query slow and fix it.
In this example, we'll analyze the performance of a query executed as part of an API GET endpoint.
First, we'll seed our test database with enough data to get meaningful query analysis results. If the database has only a few records, Postgres may execute queries differently than it would against the >100k records we have in production, and the analysis results may not match what we'd see in prod. To seed the database with about 100k records for this example, I used a load test - k6 in these examples - that repeatedly hits an account creation endpoint.
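As a side note, if you'd rather not seed through the API, the same thing can be done directly in SQL. Here's a minimal sketch using Postgres' generate_series, assuming the account_holder and bank_account tables defined later in this post (the user_ names are made up for illustration):

-- Insert ~100k account holders with unique user names
INSERT INTO account_holder (user_name, first_name, last_name)
SELECT 'user_' || i, 'First' || i, 'Last' || i
FROM generate_series(1, 100000) AS i;

-- Give each account holder one checking account
INSERT INTO bank_account (account_holder_id, account_type)
SELECT id, 'CHECKING' FROM account_holder;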
Now we'll use a second load test - this time one that accesses the GET endpoint whose performance we're testing:
import http from 'k6/http';
import { check } from "k6";
import { statusCheck200 } from "./util/statusCheck.js";

export let options = {
    stages: [
        // Ramp up to 100 virtual users over 30 seconds
        {duration: "30s", target: 100}
    ]
};

export default function () {
    const username = "5mIEbFU7jhcC1CG2";
    const response = http.get(`http://localhost:8080/accounts/${username}`);

    check(response, statusCheck200);
}
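(If you're following along: assuming the script above is saved as get-account.js, it runs with k6 run get-account.js.)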
The output from k6 is pretty extensive, but for these examples we'll focus on the p95 of the request duration and the requests per second:
http_req_duration..........: avg=295.36ms min=8.99ms med=237.01ms max=1.64s p(90)=687ms p(95)=777.33ms
iterations.................: 4927 164.230705/s
A p95 over 750 ms is not great, especially in this example where the endpoint is only fetching a few rows from one table using a simple WHERE clause and a single inner join.
This is the initial database schema:
CREATE TABLE account_holder(
    id BIGSERIAL PRIMARY KEY,
    user_name TEXT UNIQUE NOT NULL,
    first_name TEXT,
    last_name TEXT
);

CREATE TYPE bank_account_type AS ENUM ('CHECKING', 'SAVINGS');
CREATE TABLE bank_account(
    id BIGSERIAL PRIMARY KEY,
    account_holder_id BIGINT REFERENCES account_holder ON DELETE CASCADE,
    account_type bank_account_type NOT NULL
);
And this is the query that's being executed:
SELECT bank_account.id, bank_account.account_holder_id, bank_account.account_type
FROM bank_account JOIN account_holder ON bank_account.account_holder_id = account_holder.id
WHERE account_holder.user_name = '5mIEbFU7jhcC1CG2';
Now let's use Postgres' EXPLAIN ANALYZE to find the cause of the slowness in this query. To do that, all we need to do is prepend EXPLAIN ANALYZE to our query:
EXPLAIN ANALYZE SELECT bank_account.id, bank_account.account_holder_id, bank_account.account_type
FROM bank_account JOIN account_holder ON bank_account.account_holder_id = account_holder.id
WHERE account_holder.user_name = '5mIEbFU7jhcC1CG2';
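One caveat before running this: unlike plain EXPLAIN, EXPLAIN ANALYZE actually executes the statement. That's harmless for a SELECT like ours, but if you ever profile a data-modifying query, a common pattern is to wrap it in a transaction and roll back (the UPDATE below is a hypothetical example):

BEGIN;
-- The query runs for real, so the timings are accurate...
EXPLAIN ANALYZE UPDATE bank_account SET account_type = 'SAVINGS' WHERE id = 1;
-- ...but the change is discarded
ROLLBACK;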
And Postgres will output the performance breakdown of each part of the query:
Hash Join  (cost=8.45..2222.69 rows=1 width=20) (actual time=0.745..14.165 rows=1 loops=1)
  Hash Cond: (bank_account.account_holder_id = account_holder.id)
  ->  Seq Scan on bank_account  (cost=0.00..1908.75 rows=116375 width=20) (actual time=0.015..6.204 rows=116375 loops=1)
  ->  Hash  (cost=8.44..8.44 rows=1 width=8) (actual time=0.046..0.048 rows=1 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 9kB
        ->  Index Scan using account_holder_user_name_key on account_holder  (cost=0.42..8.44 rows=1 width=8) (actual time=0.033..0.034 rows=1 loops=1)
              Index Cond: (user_name = '5mIEbFU7jhcC1CG2'::text)
Planning Time: 0.876 ms
Execution Time: 14.268 ms
The output is thorough, and I won't attempt to walk through and explain each and every element here. But there are key things to look for in this output, and one big red flag is any sequential scan, like the Seq Scan on bank_account under the Hash Cond: (bank_account.account_holder_id = account_holder.id) line.
Sequential scans like that are often the sign of a missing index - and in our case, it looks like the foreign key column bank_account.account_holder_id doesn't have an index on it (by default, Postgres doesn't generate indexes on foreign key columns). Looking back at the schema, we can see there is indeed no index on that column.
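If you'd rather confirm that from the database itself than from the schema file, you can list a table's indexes through the pg_indexes view (or with \d bank_account in psql):

-- Before adding our index, this returns only the primary key index
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'bank_account';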
Let's add an index:
CREATE INDEX bank_account_account_holder_id_idx ON bank_account(account_holder_id);
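A quick aside for anyone doing this on a busy production table: plain CREATE INDEX blocks writes to the table while the index builds. Postgres' CREATE INDEX CONCURRENTLY avoids that, at the cost of a slower build (note that it can't run inside a transaction block):

-- Builds the same index without blocking writes
CREATE INDEX CONCURRENTLY bank_account_account_holder_id_idx
    ON bank_account(account_holder_id);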
Now we'll re-run the query with EXPLAIN ANALYZE and see how the index affects performance:
Nested Loop  (cost=0.83..16.88 rows=1 width=20) (actual time=0.028..0.029 rows=1 loops=1)
  ->  Index Scan using account_holder_user_name_key on account_holder  (cost=0.42..8.44 rows=1 width=8) (actual time=0.011..0.011 rows=1 loops=1)
        Index Cond: (user_name = '5mIEbFU7jhcC1CG2'::text)
  ->  Index Scan using bank_account_account_holder_id_idx on bank_account  (cost=0.42..8.44 rows=1 width=20) (actual time=0.013..0.013 rows=1 loops=1)
        Index Cond: (account_holder_id = account_holder.id)
Planning Time: 0.406 ms
Execution Time: 0.056 ms
The sequential scan is gone - replaced with an index scan - and now the query took only 0.056 ms to execute! That's down from 14.268 ms before the index, roughly a 250x speedup.
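If you want to verify over time that the planner keeps using the new index (and not just in this one EXPLAIN), Postgres tracks per-index usage counters in the pg_stat_user_indexes view:

-- idx_scan counts how many times each index has been used
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'bank_account';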
Now let's re-run our endpoint load test to see if the faster query translates to improved performance in the app as well:
http_req_duration..........: avg=33.56ms min=1.99ms med=29.64ms max=871.99ms p(90)=55.82ms p(95)=71ms
iterations.................: 44066 1468.796174/s
The p95 shrank by more than a factor of ten, from 777 ms down to 71 ms! And the requests per second saw a similar improvement, increasing from 164 per second to 1,468 per second after adding the index.
Conclusion
This is just one simple example of how using Postgres' EXPLAIN ANALYZE capability can help identify the causes of slow queries as well as verify the impact of performance improvements.