Re: sql statement not using all primary key values and poor performance
On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:
> We have a SQL statement from the PostgreSQL JDBC driver; the primary key is
> based on (bigint, varchar2, bigint), but in the SQL plan it is converted to
> ::numeric, so the plan uses only the one "varchar" key column and applies the
> other two bigint keys as filters.  What is the cause of that?
>
>                               Table "test.xx"
>       Column       |              Type              | Collation | Nullable | Default
>  ------------------+--------------------------------+-----------+----------+---------
>  xxxid             | bigint                         |           | not null |
>  paramname         | character varying(512)         |           | not null |
>  paramvalue        | character varying(1536)        |           |          |
>  sssid             | bigint                         |           | not null |
>  createtime        | timestamp(0) without time zone |           |          |
>  lastmodifiedtime  | timestamp(0) without time zone |           |          |
>  mmmuuid           | character varying(32)          |           |          |
> Indexes:
>     "pk_xx" PRIMARY KEY, btree (xxxid, paramname, sssid)
>     "idx_xx_mmmuuid" btree (sssid, mmmuuid, paramname)
>
> SET extra_float_digits = 3
>
> duration: 7086.014 ms  plan:
>   Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID, CREATETIME, LASTMODIFIEDTIME, MMMUUID
>               FROM test.XX
>               WHERE ( ( XXXID = $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )
>   Index Scan using pk_xx on test.xx  (cost=0.57..2065259.09 rows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)
>     Output: confid, paramname, paramvalue, sssid, createtime, lastmodifiedtime, mmmuuid
>     Index Cond: ((xx.paramname)::text = 'cdkkif'::text)    <<< uses only one key instead of all primary key columns
>     Filter: (((xx.xxxid)::numeric = '18174044'::numeric) AND ((xx.sssid)::numeric = '253352'::numeric))    <<< it's bigint, but converted to numeric
>     Buffers: shared read=1063470
>     I/O Timings: read=4402.029
>
> It's from JDBC; we saw this JDBC driver try to set extra_float_digits = 3
> before running the SQL.  Does that make the planner convert bigint to numeric?

Setting "extra_float_digits" is just something the JDBC driver does so as to
not lose precision with "real" and "double precision" values on old versions
of PostgreSQL.

The problem is that you bind the query parameters with the wrong data types.
Don't use "setBigDecimal()", but "setLong()" if you want to bind a "bigint".
An alternative is "setObject()" with "targetSqlType" set to "Types.BIGINT".

Yours,
Laurenz Albe
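For reference, a minimal sketch of what the corrected binding could look like
with plain JDBC, using the table and the literal values visible in the plan
above; the connection URL, user, password, and class name are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Types;

    public class BindBigintParameters {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; adjust for the real environment.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", "app", "secret")) {

                String sql = "SELECT xxxid, paramname, paramvalue, sssid, createtime,"
                           + " lastmodifiedtime, mmmuuid"
                           + " FROM test.xx"
                           + " WHERE xxxid = ? AND paramname = ? AND sssid = ?";

                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    // Bind the bigint key as long, so the parameter is sent as int8
                    // and all three primary key columns can be used by the index scan.
                    ps.setLong(1, 18174044L);
                    ps.setString(2, "cdkkif");
                    // Equivalent alternative: setObject() with an explicit target type.
                    ps.setObject(3, 253352L, Types.BIGINT);

                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("paramvalue"));
                        }
                    }
                }
            }
        }
    }

With either setLong() or setObject(..., Types.BIGINT), the parameters arrive
as bigint, so no ::numeric cast is applied to the indexed columns.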
Re: sql statement not using all primary key values and poor performance
It's a third-party vendor application, so it's not easy to change their code.
Is it possible to 1) set plan_cache_mode=force_custom_plan on the PostgreSQL
JDBC driver connection, or 2) work around this issue with some other parameter?

Thanks,
James

Laurenz Albe wrote on Friday, 23 February 2024 at 5:17 PM:
> On Fri, 2024-02-23 at 15:20 +0800, James Pang wrote:
> > We have a SQL statement from the PostgreSQL JDBC driver; the primary key is
> > based on (bigint, varchar2, bigint), but in the SQL plan it is converted to
> > ::numeric, so the plan uses only the one "varchar" key column and applies the
> > other two bigint keys as filters.  What is the cause of that?
> >
> >                               Table "test.xx"
> >       Column       |              Type              | Collation | Nullable | Default
> >  ------------------+--------------------------------+-----------+----------+---------
> >  xxxid             | bigint                         |           | not null |
> >  paramname         | character varying(512)         |           | not null |
> >  paramvalue        | character varying(1536)        |           |          |
> >  sssid             | bigint                         |           | not null |
> >  createtime        | timestamp(0) without time zone |           |          |
> >  lastmodifiedtime  | timestamp(0) without time zone |           |          |
> >  mmmuuid           | character varying(32)          |           |          |
> > Indexes:
> >     "pk_xx" PRIMARY KEY, btree (xxxid, paramname, sssid)
> >     "idx_xx_mmmuuid" btree (sssid, mmmuuid, paramname)
> >
> > SET extra_float_digits = 3
> >
> > duration: 7086.014 ms  plan:
> >   Query Text: SELECT XXXFID, PARAMNAME, PARAMVALUE, SSSID, CREATETIME, LASTMODIFIEDTIME, MMMUUID
> >               FROM test.XX
> >               WHERE ( ( XXXID = $1 ) ) AND ( ( PARAMNAME = $2 ) ) AND ( ( SSSID = $3 ) )
> >   Index Scan using pk_xx on test.xx  (cost=0.57..2065259.09 rows=1 width=86) (actual time=7086.010..7086.011 rows=0 loops=1)
> >     Output: confid, paramname, paramvalue, sssid, createtime, lastmodifiedtime, mmmuuid
> >     Index Cond: ((xx.paramname)::text = 'cdkkif'::text)    <<< uses only one key instead of all primary key columns
> >     Filter: (((xx.xxxid)::numeric = '18174044'::numeric) AND ((xx.sssid)::numeric = '253352'::numeric))    <<< it's bigint, but converted to numeric
> >     Buffers: shared read=1063470
> >     I/O Timings: read=4402.029
> >
> > It's from JDBC; we saw this JDBC driver try to set extra_float_digits = 3
> > before running the SQL.  Does that make the planner convert bigint to numeric?
>
> Setting "extra_float_digits" is just something the JDBC driver does so as to
> not lose precision with "real" and "double precision" values on old versions
> of PostgreSQL.
>
> The problem is that you bind the query parameters with the wrong data types.
> Don't use "setBigDecimal()", but "setLong()" if you want to bind a "bigint".
> An alternative is "setObject()" with "targetSqlType" set to "Types.BIGINT".
>
> Yours,
> Laurenz Albe
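When the vendor code cannot be touched, the driver's connection properties are
the usual place to experiment. Recent pgJDBC versions accept an "options"
connection property that passes backend startup options, much like PGOPTIONS
does for libpq clients. A sketch, with placeholder URL, credentials, and class
name; note that forcing custom plans will probably not remove the ::numeric
casts, since those come from the bound parameter types rather than from plan
caching:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ForceCustomPlanConnection {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app");
            props.setProperty("password", "secret");
            // Sends "-c plan_cache_mode=force_custom_plan" as a backend startup
            // option, so every session opened with these properties uses custom
            // plans only.
            props.setProperty("options", "-c plan_cache_mode=force_custom_plan");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdb", props)) {
                // ... hand the connection to the application or connection pool
            }
        }
    }

The same property can also be appended to the JDBC URL in the application's
configuration (with the value URL-encoded), or the setting can be applied
server-side with ALTER ROLE ... SET plan_cache_mode = force_custom_plan for
the application's database role.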
Re: sql statement not using all primary key values and poor performance
On Fri, 2024-02-23 at 18:21 +0800, James Pang wrote:
> It's a third-party vendor application, so it's not easy to change their code.

Then the application is broken, and you should make the vendor fix it.

> Is it possible to 1) set plan_cache_mode=force_custom_plan on the PostgreSQL
> JDBC driver connection, or 2) work around this issue with some other parameter?

You can set "prepareThreshold" to 0 to keep the JDBC driver from using
prepared statements in PostgreSQL.

I am not sure if that is enough to fix the problem.

Yours,
Laurenz Albe
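prepareThreshold is a standard pgJDBC connection property; a value of 0 stops
the driver from promoting repeatedly executed statements to named server-side
prepared statements. A sketch with a placeholder URL, credentials, and class
name; the statements are still sent with bind parameters, so this on its own
does not change the data types the driver binds, which matches the caveat
above:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class DisableServerPreparedStatements {
        public static void main(String[] args) throws Exception {
            // prepareThreshold=0 disables the switch to named server-side
            // prepared statements for this connection.
            String url = "jdbc:postgresql://localhost:5432/testdb?prepareThreshold=0";
            try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {
                // ... run the workload as usual
            }
        }
    }

If the application's JDBC URL is configurable, appending ?prepareThreshold=0
there is usually the least invasive way to try this.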
