Custom base type and suboptimal query plans
Hi,

The short version: I have a database where I converted an integer primary
key column to a custom base type that pretty much amounts to a wrapper
around an integer, and now some queries are resulting in much slower query
plans. Does Postgres have special optimizations for integers that are not
available for custom types, or did I perhaps overlook something?

The longer version: I have implemented a custom base type that encodes a
"type" (which has a fixed number of values) and a smaller integer into a
single 4-byte value. I've been using this type with success in some parts
of the database - it's much faster than a record type and more convenient
than using multiple columns - so I'm trying to extend its use to more
tables. The full implementation is available online:

SQL: https://g.blicky.net/vndb.git/tree/sql/vndbid.sql?id=30070e326f18789f8b82252090b269166d5ade22
C:   https://g.blicky.net/vndb.git/tree/sql/c/vndbfuncs.c?id=30070e326f18789f8b82252090b269166d5ade22

But now I'm running into cases where queries that used to perform really
well suddenly end up getting a much worse query plan. As an example,
observe the following query, run after doing a VACUUM FULL ANALYZE.

The fast version with integer columns:

=> EXPLAIN (ANALYZE,BUFFERS) SELECT count(*) FROM chars c
   WHERE c.id IN(SELECT cid FROM traits_chars WHERE tid IN(1957, 75));

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=5635.45..5635.46 rows=1 width=8) (actual time=1.273..1.274 rows=1 loops=1)
   Buffers: shared hit=904
   ->  Nested Loop  (cost=4145.94..5631.93 rows=1410 width=0) (actual time=0.525..1.249 rows=301 loops=1)
         Buffers: shared hit=904
         ->  HashAggregate  (cost=4145.65..4159.59 rows=1394 width=4) (actual time=0.515..0.579 rows=301 loops=1)
               Group Key: traits_chars.cid
               Batches: 1  Memory Usage: 81kB
               Buffers: shared hit=301
               ->  Bitmap Heap Scan on traits_chars  (cost=19.79..4142.12 rows=1410 width=4) (actual time=0.078..0.426 rows=301 loops=1)
                     Recheck Cond: (tid = ANY ('{1957,75}'::integer[]))
                     Heap Blocks: exact=295
                     Buffers: shared hit=301
                     ->  Bitmap Index Scan on traits_chars_tid  (cost=0.00..19.43 rows=1410 width=0) (actual time=0.039..0.039 rows=301 loops=1)
                           Index Cond: (tid = ANY ('{1957,75}'::integer[]))
                           Buffers: shared hit=6
         ->  Index Only Scan using chars_pkey1 on chars c  (cost=0.29..1.07 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=301)
               Index Cond: (id = traits_chars.cid)
               Heap Fetches: 0
               Buffers: shared hit=603
 Planning:
   Buffers: shared hit=190
 Planning Time: 0.650 ms
 Execution Time: 1.372 ms
(23 rows)

Same query, but now chars.id and traits_chars.cid are of the custom
'vndbid' type:

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=2019373.51..2019373.52 rows=1 width=8) (actual time=2273.986..2273.987 rows=1 loops=1)
   Buffers: shared hit=2917
   ->  Nested Loop Semi Join  (cost=19.71..2019370.01 rows=1400 width=0) (actual time=0.227..2273.965 rows=301 loops=1)
         Join Filter: (c.id = traits_chars.cid)
         Rows Removed by Join Filter: 28788543
         Buffers: shared hit=2917
         ->  Seq Scan on chars c  (cost=0.00..3573.94 rows=95794 width=4) (actual time=0.007..7.717 rows=95794 loops=1)
               Buffers: shared hit=2616
         ->  Materialize  (cost=19.71..4125.57 rows=1400 width=4) (actual time=0.000..0.010 rows=301 loops=95794)
               Buffers: shared hit=301
               ->  Bitmap Heap Scan on traits_chars  (cost=19.71..4118.57 rows=1400 width=4) (actual time=0.080..0.438 rows=301 loops=1)
                     Recheck Cond: (tid = ANY ('{1957,75}'::integer[]))
                     Heap Blocks: exact=295
                     Buffers: shared hit=301
                     ->  Bitmap Index Scan on traits_chars_tid  (cost=0.00..19.36 rows=1400 width=0) (actual time=0.042..0.042 rows=301 loops=1)
                           Index Cond: (tid = ANY ('{1957,75}'::integer[]))
                           Buffers: shared hit=6
 Planning:
   Buffers: shared hit=178
 Planning Time: 0.565 ms
 Execution Time: 2274.181 ms
(21 rows)

The row estimates for the traits_chars subquery are nearly identical in
both plans, and the row estimate for the chars table in the second plan is
accurate, which leads me to suspect that this is not a statistics issue. I
suspected that my custom type may be missing some operators or functions.
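For context, the SQL-level shape of such a type is roughly as follows.
This is a minimal sketch with illustrative names ('myid', 'myfuncs'), not
the actual vndbid definitions, which are in the links above:

CREATE TYPE myid;   -- shell type so the I/O functions can reference it

CREATE FUNCTION myid_in(cstring) RETURNS myid
    AS 'myfuncs' LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;
CREATE FUNCTION myid_out(myid) RETURNS cstring
    AS 'myfuncs' LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;

CREATE TYPE myid (
    INPUT = myid_in,
    OUTPUT = myid_out,
    INTERNALLENGTH = 4,  -- the "type" tag and the number packed in 4 bytes
    PASSEDBYVALUE,       -- fixed-size value fits in a Datum
    ALIGNMENT = int4
);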
Re: Custom base type and suboptimal query plans
(forgot to cc the list, sorry for the duplicate, Tom!)

On 2021-02-23, Tom Lane wrote:
> Ayo writes:
> > I have a database where I converted an integer primary key column to a
> > custom base type that pretty much amounts to a wrapper around an
> > integer, and now some queries are resulting in much slower query plans.
> > Does Postgres have special optimizations for integers that are not
> > available for custom types, or did I perhaps overlook something?
>
> The slow query isn't using the chars_pkey1 index, which makes one
> wonder if you have a corresponding index in the custom-type case,

The index exists and is usable in the custom-type case:

=> explain (analyze,buffers) SELECT count(*) FROM chars
   WHERE id BETWEEN 'c1000' AND 'c2000';

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=33.87..33.88 rows=1 width=8) (actual time=0.252..0.253 rows=1 loops=1)
   Buffers: shared hit=9
   ->  Index Only Scan using chars_pkey1 on chars  (cost=0.29..31.47 rows=959 width=0) (actual time=0.042..0.179 rows=1001 loops=1)
         Index Cond: ((id >= 'c1000'::vndbid) AND (id <= 'c2000'::vndbid))
         Heap Fetches: 0
         Buffers: shared hit=9
 Planning:
   Buffers: shared hit=97
 Planning Time: 0.383 ms
 Execution Time: 0.308 ms
(10 rows)

> or if you fat-fingered something about the index operator class
> for the custom type. As of v13 I don't think there's anything in
> that area that custom types can't replicate ... but there certainly
> is plenty of infrastructure for the standard types that you'll need
> to build out if you want equivalent functionality.

Good to know that this ought to be possible, at least. Is there
documentation about what infrastructure exists and how it interacts with
the planner? I've built upon https://www.postgresql.org/docs/13/xindex.html
and implemented everything that seemed relevant for the type. No doubt
I've missed something, but I can't really tell what that may be.
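For reference, the btree operator class behind an index scan like the one
above looks roughly like this -- a sketch with illustrative names, assuming
the comparison operators and a myid_cmp support function already exist:

CREATE OPERATOR CLASS myid_btree_ops
    DEFAULT FOR TYPE myid USING btree AS
        OPERATOR 1 < ,
        OPERATOR 2 <= ,
        OPERATOR 3 = ,
        OPERATOR 4 >= ,
        OPERATOR 5 > ,
        FUNCTION 1 myid_cmp(myid, myid);  -- returns int4: <0, 0 or >0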
Re: Custom base type and suboptimal query plans
On 2021-02-23, Tom Lane wrote:
> Have you tried "enable_seqscan = off" (and maybe also disable merge
> and hash joins) to see if you can force choice of that plan?

No luck. It uses the index now, but seemingly only to loop over it. The
integer version uses a HashAggregate; something I've missed in my
implementation must be making the planner avoid that node. Does it have
any special type requirements, other than the hash operator class?

set enable_seqscan = off;
set enable_hashjoin = off;
set enable_mergejoin = off;
EXPLAIN (ANALYZE,BUFFERS) SELECT count(*) FROM chars c
  WHERE c.id IN(SELECT cid FROM traits_chars WHERE tid IN(1957, 75));

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=2036624.55..2036624.56 rows=1 width=8) (actual time=2275.163..2275.164 rows=1 loops=1)
   Buffers: shared hit=567
   ->  Nested Loop Semi Join  (cost=20.10..2036621.02 rows=1412 width=0) (actual time=1.402..2275.143 rows=301 loops=1)
         Join Filter: (c.id = traits_chars.cid)
         Rows Removed by Join Filter: 28803593
         Buffers: shared hit=567
         ->  Index Only Scan using chars_pkey1 on chars c  (cost=0.29..2493.95 rows=95844 width=4) (actual time=0.016..5.955 rows=95844 loops=1)
               Heap Fetches: 0
               Buffers: shared hit=264
         ->  Materialize  (cost=19.80..4154.68 rows=1412 width=4) (actual time=0.000..0.010 rows=301 loops=95844)
               Buffers: shared hit=303
               ->  Bitmap Heap Scan on traits_chars  (cost=19.80..4147.62 rows=1412 width=4) (actual time=0.085..0.636 rows=301 loops=1)
                     Recheck Cond: (tid = ANY ('{1957,75}'::integer[]))
                     Heap Blocks: exact=297
                     Buffers: shared hit=303
                     ->  Bitmap Index Scan on traits_chars_tid  (cost=0.00..19.45 rows=1412 width=0) (actual time=0.046..0.046 rows=301 loops=1)
                           Index Cond: (tid = ANY ('{1957,75}'::integer[]))
                           Buffers: shared hit=6
 Planning:
   Buffers: shared hit=179
 Planning Time: 0.578 ms
 Execution Time: 2275.328 ms
(22 rows)
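For completeness, the hash operator class itself looks roughly like this
(again a sketch with illustrative names); as the next message shows, the
opclass alone is not enough to get hash-based plans:

CREATE FUNCTION myid_hash(myid) RETURNS int4
    AS 'myfuncs' LANGUAGE C IMMUTABLE STRICT PARALLEL SAFE;

CREATE OPERATOR CLASS myid_hash_ops
    DEFAULT FOR TYPE myid USING hash AS
        OPERATOR 1 = ,
        FUNCTION 1 myid_hash(myid);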
Re: Custom base type and suboptimal query plans
On 2021-02-23, Tom Lane wrote:
> Hmm ... did you remember to set the oprcanhash property on the equality
> operator?

Aaah! That was it: I had totally missed the HASHES and MERGES options to
CREATE OPERATOR. It works perfectly now that I've set those. Do those
options make sense for other operators besides equality, too?

Many thanks for the help.
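For the archives, the fixed declaration looks roughly like this
(illustrative names again; eqsel and eqjoinsel are the stock selectivity
estimators for equality):

CREATE OPERATOR = (
    LEFTARG    = myid,
    RIGHTARG   = myid,
    FUNCTION   = myid_eq,
    COMMUTATOR = = ,
    NEGATOR    = <> ,
    RESTRICT   = eqsel,
    JOIN       = eqjoinsel,
    HASHES,  -- allow hash joins and hash-based grouping on this operator
    MERGES   -- allow merge joins on this operator
);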