Tags: sql, postgresql, postgresql-9.3, query-planner

Why is the Postgres query planner affected by LIMIT?


EXPLAIN ANALYZE SELECT     "alerts"."id", 
            "alerts"."created_at", 
            't1'::text AS src_table 
 FROM       "alerts" 
 INNER JOIN "devices" 
 ON         "devices"."id" = "alerts"."device_id" 
 INNER JOIN "sites" 
 ON         "sites"."id" = "devices"."site_id" 
 WHERE      "sites"."cloud_id" = 111
 AND        "alerts"."created_at" >= '2019-08-30'
 ORDER BY   "created_at" DESC limit 9;

 Limit  (cost=1.15..36021.60 rows=9 width=16) (actual time=30.505..29495.765 rows=9 loops=1)
  ->  Nested Loop  (cost=1.15..232132.92 rows=58 width=16) (actual time=30.504..29495.755 rows=9 loops=1)
        ->  Nested Loop  (cost=0.86..213766.42 rows=57231 width=24) (actual time=0.029..29086.323 rows=88858 loops=1)
              ->  Index Scan Backward using alerts_created_at_index on alerts  (cost=0.43..85542.16 rows=57231 width=24) (actual time=0.014..88.137 rows=88858 loops=1)
                    Index Cond: (created_at >= '2019-08-30 00:00:00'::timestamp without time zone)
              ->  Index Scan using devices_pkey on devices  (cost=0.43..2.23 rows=1 width=16) (actual time=0.016..0.325 rows=1 loops=88858)
                    Index Cond: (id = alerts.device_id)
        ->  Index Scan using sites_pkey on sites  (cost=0.29..0.31 rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=88858)
              Index Cond: (id = devices.site_id)
              Filter: (cloud_id = 7231)
              Rows Removed by Filter: 1
Total runtime: 29495.816 ms

Now we change to LIMIT 10:

 EXPLAIN ANALYZE SELECT     "alerts"."id", 
            "alerts"."created_at", 
            't1'::text AS src_table 
 FROM       "alerts" 
 INNER JOIN "devices" 
 ON         "devices"."id" = "alerts"."device_id" 
 INNER JOIN "sites" 
 ON         "sites"."id" = "devices"."site_id" 
 WHERE      "sites"."cloud_id" = 111
 AND        "alerts"."created_at" >= '2019-08-30'
 ORDER BY   "created_at" DESC limit 10;

Limit  (cost=39521.79..39521.81 rows=10 width=16) (actual time=1.557..1.559 rows=10 loops=1)
  ->  Sort  (cost=39521.79..39521.93 rows=58 width=16) (actual time=1.555..1.555 rows=10 loops=1)
        Sort Key: alerts.created_at
        Sort Method: quicksort  Memory: 25kB
        ->  Nested Loop  (cost=5.24..39520.53 rows=58 width=16) (actual time=0.150..1.543 rows=11 loops=1)
              ->  Nested Loop  (cost=4.81..16030.12 rows=2212 width=8) (actual time=0.137..0.643 rows=31 loops=1)
                    ->  Index Scan using sites_cloud_id_index on sites  (cost=0.29..64.53 rows=31 width=8) (actual time=0.014..0.057 rows=23 loops=1)
                          Index Cond: (cloud_id = 7231)
                    ->  Bitmap Heap Scan on devices  (cost=4.52..512.32 rows=270 width=16) (actual time=0.020..0.025 rows=1 loops=23)
                          Recheck Cond: (site_id = sites.id)
                          ->  Bitmap Index Scan on devices_site_id_index  (cost=0.00..4.46 rows=270 width=0) (actual time=0.006..0.006 rows=9 loops=23)
                                Index Cond: (site_id = sites.id)
              ->  Index Scan using alerts_device_id_index on alerts  (cost=0.43..10.59 rows=3 width=24) (actual time=0.024..0.028 rows=0 loops=31)
                    Index Cond: (device_id = devices.id)
                    Filter: (created_at >= '2019-08-30 00:00:00'::timestamp without time zone)
                    Rows Removed by Filter: 12
Total runtime: 1.603 ms

The alerts table has millions of records; the other tables are counted in the thousands.

I can already work around this by simply not using LIMIT < 10. What I don't understand is why the LIMIT value affects performance at all. Perhaps there is a better way than hardcoding the magic number 10.


Solution

  • The LIMIT (the number of result rows requested) affects the PostgreSQL optimizer, because a plan that returns the first few rows quickly is not necessarily a plan that returns the whole result as fast as possible.

    In your case, PostgreSQL thinks that for small values of LIMIT it will be faster to scan the alerts table in the order of the ORDER BY clause using an index and join the other tables in a nested loop until it has found 9 rows.

    The benefit of such a strategy is that it doesn't have to calculate the complete result of the join, then sort it and throw away all but the first few result rows. The danger is that it takes longer than expected to find the 9 matching rows, and this is what hits you:

    Index Scan Backward using alerts_created_at_index on alerts (cost=0.43..85542.16 rows=57231 width=24) (actual time=0.014..88.137 rows=88858 loops=1)

    So PostgreSQL has to process 88858 rows, joining each one in a nested loop (which is inefficient when it loops that often), before it finds 9 result rows. This may be because it underestimates the selectivity of the conditions, or because the many matching rows all happen to have a low created_at.
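
    You can check that misestimate directly: the planner expected 57231 rows for the created_at condition, but the backward index scan had already produced 88858 rows when the query stopped, so the true count is at least that. A quick sanity check (and, if the statistics turn out to be stale, running ANALYZE may improve the estimate):

    SELECT count(*) FROM alerts WHERE created_at >= '2019-08-30';  -- compare with the planner's estimate of 57231
    ANALYZE alerts;                                                -- refresh the table's statistics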

    The number 10 just happens to be the cut-off point where PostgreSQL thinks it will no longer be more efficient to use that strategy; it is a value that will change as the data in the database change.
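
    You can see where that cut-off comes from in the cost numbers of the two plans above. Roughly (a back-of-the-envelope check, not the planner's exact formula), a LIMIT of n on top of the fast-start plan is charged that plan's startup cost plus n/58 of the rest, 58 being the estimated row count of the nested loop:

    LIMIT 9:   1.15 + ( 9/58) * (232132.92 - 1.15) ≈ 36021.60   -- cheaper than the sort plan's 39521.81
    LIMIT 10:  1.15 + (10/58) * (232132.92 - 1.15) ≈ 40023.87   -- more expensive, so the sort plan wins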

    You can avoid using that plan altogether by using an ORDER BY clause that does not match the index:

    ORDER BY created_at DESC NULLS LAST
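
    Since DESC ordering defaults to NULLS FIRST, DESC NULLS LAST no longer matches the sort order of alerts_created_at_index, which takes the backward index scan plan off the table; assuming created_at is NOT NULL, the result is unchanged. The full query would then read (only the ORDER BY differs from the original):

    SELECT     "alerts"."id",
               "alerts"."created_at",
               't1'::text AS src_table
    FROM       "alerts"
    INNER JOIN "devices"
    ON         "devices"."id" = "alerts"."device_id"
    INNER JOIN "sites"
    ON         "sites"."id" = "devices"."site_id"
    WHERE      "sites"."cloud_id" = 7231
    AND        "alerts"."created_at" >= '2019-08-30'
    ORDER BY   "created_at" DESC NULLS LAST LIMIT 9;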