Avoiding out of date statistics / planner

2020-02-12 Thread Tim Kane
Every now and again, I will encounter an unexplained long-running query.

It’s a head-scratcher moment, because this query that has been running for
20 minutes (not blocked) completes in about 500ms when run independently.

I can only assume that the problem query ran against the table(s) at a time
when they were in need of a vacuum analyze...  I’m guessing here that the
table had seen some amount of change and simply had out-of-date
statistics.

How can I avoid this?
The auto-vacuum daemon is doing its thing, but there is always going to be
an opportunity for a query to sneak in against a table that has recently
seen large change, but has not yet been analysed.
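
For reference, one way to spot tables that have drifted since their last
analyze - a read-only sketch against the standard statistics views
(n_mod_since_analyze is available from 9.4 onwards):

select relname, last_analyze, last_autoanalyze, n_mod_since_analyze
from pg_stat_user_tables
order by n_mod_since_analyze desc;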

On the application side, we can explicitly issue a VACUUM ANALYZE after
each bulk operation - and often that is precisely what happens.

But - I am keenly aware that VACUUM cannot be performed within a transaction.
That means there is always a (small) window in which a query can still
execute against stale statistics in this scenario.
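
For illustration, the pattern in question (big_table and the file path are
placeholders). Note that a plain ANALYZE, unlike VACUUM, is allowed inside
a transaction block, which narrows the window somewhat:

begin;
copy big_table from '/path/to/batch.csv' csv;
analyze big_table;  -- permitted inside a transaction block, unlike VACUUM
commit;
vacuum analyze big_table;  -- VACUUM itself must wait until after commit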

Are there any other best practices that can mitigate this kind of problem?

It’s rare, sure - but I don’t like sweeping these under the rug.

I’m on PG 9.6.. perhaps there are planner improvements since then that
might reduce the incidence of these (rare) issues.

Any advice appreciated, thanks.


Re: Avoiding out of date statistics / planner

2020-02-13 Thread Tim Kane
That looks very useful indeed. Thanks Tomas

On Wed, Feb 12, 2020 at 8:32 PM Tomas Vondra 
wrote:

> On Wed, Feb 12, 2020 at 10:23:22AM -0700, Michael Lewis wrote:
> >It may also be worth noting that it is possible to make autovacuum/analyze
> >more aggressive, perhaps only on the tables that see large changes in data
> >that might result in a statistics issue. If you could share a query,
> >explain analyze output, and pseudo code or at least description of what
> >sort of bulk operations are being done, then more insight could be
> offered.
>
> Another thing you can do is deploy auto_explain, and log the explain plan
> for long-running queries. That won't fix the root cause, but it will
> help you confirm the root cause - you'll see the query plan,
> which should give you enough context.
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
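
To make both suggestions concrete - a sketch only, with thresholds picked
arbitrarily and bulk_target as a placeholder table name:

-- Michael's suggestion: per-table autoanalyze tuning, so the table is
-- re-analysed after ~1000 changed rows rather than the default 10%.
alter table bulk_target set (
    autovacuum_analyze_scale_factor = 0,
    autovacuum_analyze_threshold = 1000
);

-- Tomas's suggestion: auto_explain, e.g. in postgresql.conf:
--   shared_preload_libraries = 'auto_explain'
--   auto_explain.log_min_duration = '30s'
--   auto_explain.log_analyze = on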


btree_gist extension - gbt_cash_union return type

2020-05-22 Thread Tim Kane
I've noticed a discrepancy in the return type for the gbt_cash_union
function...

On fresh instances of Postgres 9.6.11, where the btree_gist extension is
newly created (version 1.2), the gbt_cash_union function has a return
type of gbtreekey16.

On instances that have been upgraded from 9.6.2 to 9.6.11, where
btree_gist was originally installed as 1.0 and then upgraded from 1.0 to
1.2, that same function has a return type of gbtreekey8.
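
The discrepancy is visible with a catalog query along these lines:

select p.proname, t.typname as return_type
from pg_proc p
join pg_type t on t.oid = p.prorettype
where p.proname = 'gbt_cash_union';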

I expect something is missing from the extension upgrade script to modify
the return type.

Is it safe/recommended to modify this function to return gbtreekey16?
Perhaps safer still to drop the extension and recreate it?

Thanks in advance. If this should be considered a bug, and there is a
better place to report such - please advise.

Tim


Re: btree_gist extension - gbt_cash_union return type

2020-05-22 Thread Tim Kane
Thank you Tom, I appreciate the thorough explanation.

Good to confirm that it’s of no consequence.

Tim

On Fri, 22 May 2020 at 15:44, Tom Lane  wrote:

> Tim Kane  writes:
> > I've noticed a discrepancy in the return type for the gbt_cash_union
> > function...
> > On fresh instances of postgres 9.6.11, where the btree_gist extension is
> > newly created (version 1.2) yields a gbt_cash_union function with a
> return
> > type of gbtreekey16
>
> ... which is correct.
>
> > While instances that have been upgraded from 9.6.2 to 9.6.11, where the
> > btree_gist was originally installed as 1.0 and then upgraded from 1.0 to
> > 1.2 - that same function has a return type of gbtreekey8
>
> Hm.  I think this is an oversight in commit 749a787c5; we were focused
> on fixing the functions' argument types and forgot that there were any
> return-type changes.
>
> However, I'm not too fussed about it.  Nothing checks those signatures
> at run-time, so it's basically cosmetic.  The reason for the pushups
> in 749a787c5 was to ensure that we could name the functions in ALTER
> FUNCTION; but that just depends on the argument types, so it's not
> a reason to worry either.
>
> > Is it safe/recommended to modify this function to return gbtreekey16?
>
> I wouldn't sweat about it.  If you did want to fix it, it'd have to be
> a manual UPDATE on pg_proc, there not being any ALTER FUNCTION way
> to do it.  On the whole, the risk of fat-fingering the update and
> thereby hosing your database seems to outweigh any benefit.
>
> > Perhaps safer still to drop the extension and recreate it?
>
> That would force dropping the indexes that depend on it, so
> it seems like a big overreaction.
>
> regards, tom lane
>
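
For the record, the manual fix Tom describes (and advises against) would be
something like the following - a sketch only, not a recommendation:

-- Cosmetic only, per Tom's explanation. Requires superuser; back up first,
-- since fat-fingering a pg_proc update can hose the database.
update pg_proc
set prorettype = 'gbtreekey16'::regtype
where proname = 'gbt_cash_union'
  and prorettype = 'gbtreekey8'::regtype;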


Alter domain type / avoiding table rewrite

2019-04-16 Thread Tim Kane
So I have a situation where I would like to modify a field that is
currently a domain type over a varchar(9)

Specifically:
CREATE DOMAIN old_type AS varchar(9)

This isn't ideal, let's just say.. legacy.


I wish to modify this type.. ideally to a text type with a length
constraint.. or even just a slightly larger varchar(12) would suffice..

CREATE DOMAIN new_type AS text;
ALTER DOMAIN new_type ADD CONSTRAINT check_len
  CHECK (length(VALUE) <= 12) NOT VALID;

ALTER TABLE target
  ALTER COLUMN value SET DATA TYPE new_type;


But it seems impossible to achieve either without a full table rewrite.

This seems to boil down to DOMAIN types not being considered binary
compatible..

I've tried using a custom CAST..

CREATE CAST (old_type AS new_type) WITHOUT FUNCTION AS IMPLICIT;

But that blows up, with:

WARNING:  cast will be ignored because the source data type is a domain
ERROR:  domain data types must not be marked binary compatible


So I'm a little stuck at this point.

I feel like - if I can prove that the binary representations of both domains
are truly identical - I might be forced to modify the system tables as a
workaround.. that scares me on a production system.

Is there a way around this that I'm not seeing?

I'm on PostgreSQL 9.6.2


Re: Alter domain type / avoiding table rewrite

2019-04-16 Thread Tim Kane
Thanks everyone..

It seems that the first step:

old_type --> varchar(9)

still requires a table rewrite, while the reverse direction does not.


I'm curious about the performance implications of domain types - I expect
that cost is only paid at insert/update time? I guess we've been wearing
that cost up until now.

Adrian is correct - the intention for the DOMAIN with CHECK approach was to
allow flexibility moving forward, as the data set is particularly large...

I'm now thinking that since promotion to a larger size is a non-issue, and
the domain type is not quite the panacea I'd hoped, the use of
varchar(n) is perhaps not so terrible!
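
(For anyone following along: widening a plain varchar limit is a
catalog-only change - no rewrite - which can be confirmed via relfilenode.
A quick sketch, assuming a table named target with column value:)

select relfilenode from pg_class where relname = 'target';
alter table target alter column value set data type varchar(12);
-- relfilenode should be unchanged, i.e. no table rewrite occurred
select relfilenode from pg_class where relname = 'target';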

Thanks for the advice/suggestions/discussion :)


Re: Alter domain type / avoiding table rewrite

2019-04-17 Thread Tim Kane
On Tue, 16 Apr 2019 at 18:04, Adrian Klaver 
wrote:

>
>
> Where are you seeing the rewrite in your case?



I’m unfamiliar with ctid (which I now plan to read up on)… I’ve been
looking at relfilenode.

I’ve observed that relfilenode changes when altering from old_type →
varchar(9), and the operation takes 6 seconds on this data set.



PSQL:alpha_db/postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20669469
(1 row)



PSQL:alpha_db/postgres@[local]=# alter table test alter column exec_id set data type varchar(9);
ALTER TABLE
Time: 6605.454 ms

PSQL:alpha_db/postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20671802
(1 row)



And then the other way… from varchar(9) → old_type, relfilenode does not
change, and the operation takes about a millisecond.



PSQL:alpha_db/postgres@[local]=# alter table test alter column exec_id set data type execid_t;
ALTER TABLE
Time: 1.360 ms

PSQL:alpha_db/postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20671802
(1 row)

Time: 0.331 ms



Apologies if this formats badly :-/ transcribing between devices not well
suited to email.

Tim



Re: Alter domain type / avoiding table rewrite

2019-04-17 Thread Tim Kane
On Wed, 17 Apr 2019 at 15:23, Adrian Klaver 
wrote:

>
>
> The table definition and the size of the data set would help with
> interpreting the below.




The example below shows the issue on a table with just a single field. I
can demonstrate the problem whether there are 100 records or a million
records.

In every case:

- Altering the type from a domain of varchar(9) to a raw varchar(9)
  results in a full table rewrite (as identified by relfilenode).
- Altering the type from a raw varchar(9) to a domain of varchar(9)
  occurs for free, with no change to relfilenode.

The timing of each ALTER operation appears to back this up.






postgres@[local]=# create domain old_type as varchar(9);
CREATE DOMAIN

postgres@[local]=# create table test ("values" old_type);
CREATE TABLE

postgres@[local]=# with data as (select generate_series(1,100), md5(random()::text))
postgres@[local]-# insert into test select substring(md5, 1, 9) from data;
INSERT 0 100
Time: 4097.162 ms

postgres@[local]=# \d test
    Table "alpha_core.test"
 Column |   Type   | Modifiers
--------+----------+-----------
 values | old_type |

postgres@[local]=# \dD old_type
                             List of domains
   Schema   |   Name   |         Type         | Modifier | Check
------------+----------+----------------------+----------+-------
 alpha_core | old_type | character varying(9) |          |
(1 row)

postgres@[local]=# select count(*) from test;
 count
-------
   100
(1 row)



postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20689856
(1 row)


postgres@[local]=# alter table test alter column "values" set data type varchar(9);
ALTER TABLE
Time: 993.271 ms


postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20691283
(1 row)

postgres@[local]=# alter table test alter column "values" set data type old_type;
ALTER TABLE
Time: 21.569 ms


postgres@[local]=# select relfilenode from pg_class where relname='test';
 relfilenode
-------------
    20691283
(1 row)

postgres@[local]=# drop table test;
DROP TABLE


Slony mailing list activity

2022-01-19 Thread Tim Kane
Apologies for posting here, but.. I have attempted to sign up to the Slony
mailing lists and have been waiting over a week to be approved.

I suspect the mailing list functionality there is broken, or perhaps the
moderator is MIA.

When attempting to sign up, the following is displayed:

Your subscription request has been received, and will soon be acted upon.
Depending on the configuration of this mailing list, your subscription
request may have to be first confirmed by you via email, or approved by the
list moderator. If confirmation is required, you will soon get a
confirmation email which contains further instructions.


Cheers,

Tim


Re: Slony mailing list activity

2022-01-19 Thread Tim Kane
I’ve just found the list maintainer’s address; perhaps that goes somewhere.

slony1-general-ow...@lists.slony.info


On Wed, 19 Jan 2022 at 18:50, Tim Kane  wrote:

> Apologies for posting here, but.. I have attempted to sign up to the Slony
> mailing lists and have been waiting over a week to be approved.
>
> I suspect the mailing list functionality there is broken, or perhaps the
> moderator is MIA.
>
> When attempting to sign up, the following is displayed:
>
> Your subscription request has been received, and will soon be acted upon.
> Depending on the configuration of this mailing list, your subscription
> request may have to be first confirmed by you via email, or approved by the
> list moderator. If confirmation is required, you will soon get a
> confirmation email which contains further instructions.
>
>
> Cheers,
>
> Tim