Must be superuser to create subscriptions - Any way I can avoid using superuser

2021-07-01 Thread Avi Weinberg
I would like to avoid using superuser to create subscriptions (logical 
replication).  If I do not use superuser, I get the following error message: 
"must be superuser to create subscriptions".  Is there a way to avoid using 
superuser to create a subscription?  And what is the pg_subscription_users 
role that I have read being discussed?

Thanks!
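The superuser requirement was later relaxed: PostgreSQL 16 added the predefined 
role pg_create_subscription, which together with CREATE on the target database 
lets a non-superuser create subscriptions. A sketch (the role and database 
names are placeholders):

```sql
-- PostgreSQL 16+ only: allow a non-superuser to create subscriptions
GRANT pg_create_subscription TO repl_user;
GRANT CREATE ON DATABASE mydb TO repl_user;
```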


Damaged (during upgrade?) table, how to repair?

2021-07-01 Thread W.P.

Hi there,

I have a PG 11.12 (was 9.5 or 9.5) upgraded from 9.x (during an OS upgrade).

One table is now damaged; trying to dump it causes a server restart, the 
message is "invalid record length  maximum is yyy" (from memory).


pg_dumpall fails as well.

How can I repair this table? (For recent data this works OK.)


Laurent






Insert/Dump/Restore table with generated columns

2021-07-01 Thread zickzack
Hi,

I have several tables with generated columns. If I restore the plain dumped 
data (INSERT statements from pg_dump) I get the error message "Column xyz is 
a generated column.". The exception is understandable, no question (and is well 
documented). When the error occurs, no insert takes place. 
My problem now is that my simple backup/restore workflow is broken, because 
those tables with a generated column will be empty.

## The question is:

Does a "simple" workaround exist to prevent this?
Is it somehow possible to dump only the non-generated columns?
Is it somehow possible to ignore the error and allow the other columns to be 
inserted?
Any other handy solutions? 

## simple example 

```sql
CREATE OR REPLACE FUNCTION generate_person_age(birth timestamptz)
RETURNS double precision
LANGUAGE sql IMMUTABLE
AS $$
    SELECT EXTRACT(years FROM justify_interval(now() - birth));
$$;

DROP TABLE IF EXISTS person;
CREATE TABLE person (
    id serial PRIMARY KEY,
    name text NOT NULL,
    birth timestamptz NOT NULL,
    age double precision GENERATED ALWAYS AS (generate_person_age(birth)) STORED
);

INSERT INTO person(name, birth) VALUES
    ('Peter Pan', '1902-01-01');

-- leads to exception
INSERT INTO person(id, name, birth, age) VALUES
    (1, 'Peter Pan', '1902-01-01 00:00:00+00', '121');
``` 

* exception
```
cannot insert into column "age"
Column "age" is a generated column.
``` 

## Solution ideas

* A colleague had the idea to change the generated columns into normal ones 
and to use a trigger on changes of the source columns. Inserts should then 
behave no differently than with a default column.
* Writing a BEFORE INSERT trigger that removes the generated column values. 
But that does not seem very performant, because it will fire on every insert, 
not only during restores.
* Excluding tables with generated columns from pg_dump and exporting them with 
hand-written SQL. Seems like a good source of many bugs and a maintenance 
problem. 
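One hedged sketch of the "dump only non-generated columns" idea, using psql's 
\copy with an explicit column list (the file name is a placeholder):

```sql
-- Export only the non-generated columns of the example table above:
\copy person (id, name, birth) TO 'person.csv' WITH (FORMAT csv)

-- On restore, the generated "age" column is recomputed automatically:
\copy person (id, name, birth) FROM 'person.csv' WITH (FORMAT csv)
```

This avoids touching the generated column entirely, at the cost of maintaining 
the column list by hand.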
Greetings!









Re: Insert/Dump/Restore table with generated columns

2021-07-01 Thread David Rowley
On Thu, 1 Jul 2021 at 22:06,  wrote:
> I have several tables with generated columns. If I restore the plain dumped 
> data (insert statements from pg_dump) I'll get the error message "Column xyz 
> is a generated column.". The exception is understandable, no question (and is 
> well documented). In case of the error no insert takes place.
> My problem now is that my simple backup/restore workflow is broken, because 
> those tables with generated columns will be empty.

As far as I can see, this shouldn't happen.  I tried to reproduce it and I can't.

create table ab (a int, b int generated always as (a / 2) stored);
insert into ab values(1);

Running:

pg_dump --table=ab --column-inserts postgres

I see the following in the pg_dump output.

INSERT INTO public.ab (a, b) VALUES (1, DEFAULT);

pg_dump --table=ab --inserts postgres

gives:

INSERT INTO public.ab VALUES (1, DEFAULT);

both of these commands work fine when I run them on the existing database.

I tested this on current master, but looking at the history [1], it
looks like the pg_dump support was added when the feature went in, so
that indicates that it was not missed then subsequently fixed later.

Just to keep us from having to guess, are you able to share the
version of PostgreSQL you're running? Also, the version of pg_dump?
pg_dump --version will tell you that.

David

[1] 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fc22b6623b6b3bab3cb057ccd282c2bfad1a0b30




Greatest of a list of columns?

2021-07-01 Thread Ron

Postgresql 12.5


What's the canonical Postgresql method for doing, for example, this?
SELECT relname, MAXOF(last_vacuum, last_autovacuum)
FROM pg_stat_user_tables;

Seeing both last_vacuum and last_autovacuum is useful, of course, but 
sometimes I only want to see the "really" last time it was vacuumed.


I can hard code a case statement, but a generic multi-column solution is 
preferred.


Thanks

--
Angular momentum makes the world go 'round.




Re: Greatest of a list of columns?

2021-07-01 Thread Pavel Stehule
On Thu, 1 Jul 2021 at 15:26, Ron wrote:

> Postgresql 12.5
>
>
> What's the canonical Postgresql method for doing, for example, this?
> SELECT relname, MAXOF(last_vacuum, last_autovacuum)
> FROM pg_stat_user_tables;
>
> Seeing both last_vacuum and last_autovacuum is useful, of course, but
> sometimes I only want to see the "really" last time it was vacuumed.
>
> I can hard code a case statement, but a generic multi-column solution is
> preferred.
>

postgres=# select greatest(current_date, current_date + 1);
┌────────────┐
│  greatest  │
╞════════════╡
│ 2021-07-02 │
└────────────┘
(1 row)
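Applied to the original query (GREATEST in PostgreSQL ignores NULL arguments 
unless all of them are NULL, which suits the vacuum timestamps):

```sql
SELECT relname,
       GREATEST(last_vacuum, last_autovacuum) AS last_vacuumed
FROM pg_stat_user_tables;
```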

Pavel



> Thanks
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Re: Greatest of a list of columns?

2021-07-01 Thread Pavel Stehule
On Thu, 1 Jul 2021 at 15:27, Pavel Stehule wrote:

>
>
> On Thu, 1 Jul 2021 at 15:26, Ron wrote:
>
>> Postgresql 12.5
>>
>>
>> What's the canonical Postgresql method for doing, for example, this?
>> SELECT relname, MAXOF(last_vacuum, last_autovacuum)
>> FROM pg_stat_user_tables;
>>
>> Seeing both last_vacuum and last_autovacuum is useful, of course, but
>> sometimes I only want to see the "really" last time it was vacuumed.
>>
>> I can hard code a case statement, but a generic multi-column solution is
>> preferred.
>>
>
>  postgres=# select greatest(current_date, current_date + 1);
> ┌────────────┐
> │  greatest  │
> ╞════════════╡
> │ 2021-07-02 │
> └────────────┘
> (1 row)
>
>
https://www.postgresql.org/docs/current/functions-conditional.html


> Pavel
>
>
>
>> Thanks
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>>
>>


Re: Damaged (during upgrade?) table, how to repair?

2021-07-01 Thread Laurenz Albe
On Thu, 2021-07-01 at 10:56 +0200, W.P. wrote:
> I have a PG 11.12 (was 9.5 or 9.5) upgraded from 9.x (during OS upgrade).
> 
> One table is now damaged, trying to dump it results in server restart, 
> message is "invalid record length  maximum is yyy" (from memory).
> 
> How can I (?) repair this table? (for recent data this works OK).

If you have a backup, take that.

If not, hire an expert in data recovery.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





EXPLAIN with anonymous DO block?

2021-07-01 Thread Ron

Postgresql 12.5

There's a query inside a DO block which -- because it's parameterized -- I'd 
rather analyze while it's in the FOR loop of a DO block, instead of pulling 
it out and hard-coding the parameters.


Is this possible?  If so, where do I put the EXPLAIN statement?

--
Angular momentum makes the world go 'round.




Re: EXPLAIN with anonymous DO block?

2021-07-01 Thread Michael Lewis
It sounds like you are wanting to run 'explain analyze [query]' inside a
loop inside a DO block. That isn't possible as far as I know, but
auto_explain and log_nested_statements should be able to let you profile
the whole thing and perhaps you can pick out the part you want from the
logs.


Re: EXPLAIN with anonymous DO block?

2021-07-01 Thread Tom Lane
Ron  writes:
> There's a query inside a DO block which -- because it's parameterized -- I'd 
> rather analyze while it's in the FOR loop of a DO block, instead of pulling 
> it out and hard-coding the parameters.

> Is this possible?

No.

The thing to do to duplicate the behavior of a plpgsql query is
to set it up as a PREPAREd statement (with parameters for any
plpgsql variables it references) and use EXPLAIN EXECUTE.
This also works if you're curious about the behavior of a
query issued via PQexecParams or the like.

It's advisable to repeat the EXPLAIN half a dozen times
to see if the plancache switches from a custom to a generic
plan.  (In recent PG releases, changing plan_cache_mode
is another way to check what happens.)
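A minimal sketch of that approach, with a single integer parameter standing in 
for the plpgsql variable (statement name and parameter are placeholders):

```sql
PREPARE q (int) AS
    SELECT * FROM pg_class WHERE oid = $1::oid;

EXPLAIN EXECUTE q(112);
-- Repeat a few times to see whether the plancache
-- switches from a custom to a generic plan:
EXPLAIN EXECUTE q(112);

DEALLOCATE q;
```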

regards, tom lane




Re: EXPLAIN with anonymous DO block?

2021-07-01 Thread David G. Johnston
On Thu, Jul 1, 2021 at 9:22 AM Michael Lewis  wrote:

> It sounds like you are wanting to run 'explain analyze [query]' inside a
> loop inside a DO block. That isn't possible as far as I know, but
> auto_explain and log_nested_statements should be able to let you profile
> the whole thing and perhaps you can pick out the part you want from the
> logs.
>

I believe it can be done technically, though basically the function will
need to be re-written for the purpose. It isn't as simple as adding an
explain somewhere, since the output of EXPLAIN is a result set.  But as you
are already using pl/pgsql, your parameters can simply be set up as
variables instead, and that query should then be explainable.

David J.


Re: EXPLAIN with anonymous DO block?

2021-07-01 Thread Pavel Luzanov

Good day!

> There's a query inside a DO block which -- because it's parameterized -- I'd
> rather analyze while it's in the FOR loop of a DO block, instead of pulling
> it out and hard-coding the parameters.
> Is this possible?
>
> No.

Why not use the auto_explain module?

postgres=# LOAD 'auto_explain';
LOAD
postgres=# SET auto_explain.log_min_duration = 0;
SET
postgres=# SET auto_explain.log_nested_statements = on;
SET
postgres=# SET auto_explain.log_analyze = on;
SET
postgres=# SET auto_explain.log_level = 'NOTICE';
SET
postgres=# DO $$BEGIN FOR i IN 112 .. 113 LOOP PERFORM * FROM pg_class 
WHERE oid = i::oid; END LOOP; END;$$;

NOTICE:  duration: 0.013 ms  plan:
Query Text: SELECT * FROM pg_class WHERE oid = i::oid
Index Scan using pg_class_oid_index on pg_class  (cost=0.27..8.29 rows=1 
width=265) (actual time=0.009..0.011 rows=1 loops=1)

  Index Cond: (oid = '112'::oid)
NOTICE:  duration: 0.016 ms  plan:
Query Text: SELECT * FROM pg_class WHERE oid = i::oid
Index Scan using pg_class_oid_index on pg_class  (cost=0.27..8.29 rows=1 
width=265) (actual time=0.008..0.009 rows=1 loops=1)

  Index Cond: (oid = '113'::oid)
DO

Pavel Luzanov
Postgres Professional: https://postgrespro.com
The Russian Postgres Company






Re: Damaged (during upgrade?) table, how to repair?

2021-07-01 Thread W.P.

On 01.07.2021 at 16:19, Laurenz Albe wrote:

On Thu, 2021-07-01 at 10:56 +0200, W.P. wrote:

I have a PG 11.12 (was 9.5 or 9.5) upgraded from 9.x (during OS upgrade).

One table is now damaged, trying to dump it results in server restart,
message is "invalid record length  maximum is yyy" (from memory).

How can I (?) repair this table? (for recent data this works OK).

If you have a backup, take that.

If not, hire an expert in data recovery.

Yours,
Laurenz Albe


I have something similar to a backup: the old database, but it is 9.5.  How 
can I install 9.5 binaries / libs / config on Fedora 30 i386 (with PG11 
installed) side-by-side?



Laurent





Re: Damaged (during upgrade?) table, how to repair?

2021-07-01 Thread Adrian Klaver

On 7/1/21 12:56 PM, W.P. wrote:

On 01.07.2021 at 16:19, Laurenz Albe wrote:

On Thu, 2021-07-01 at 10:56 +0200, W.P. wrote:
I have a PG 11.12 (was 9.5 or 9.5) upgraded from 9.x (during OS 
upgrade).


One table is now damaged, trying to dump it results in server restart,
message is "invalid record length  maximum is yyy" (from memory).

How can I (?) repair this table? (for recent data this works OK).

If you have a backup, take that.

If not, hire an expert in data recovery.

Yours,
Laurenz Albe


I have something similar to backup: old database, but it is 9.5, how can 
I install 9.5 binaries / libs / config on Fedora 30 i386 (with PG11 
installed) side-by-side?


Is the old database on another machine where it can be started and then 
the data dumped?





Laurent






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PGDLLIMPORT: patch or not to patch

2021-07-01 Thread Craig Ringer
On Wed, 30 Jun 2021 at 04:49, Tom Lane  wrote:

> George Tarasov  writes:
> > So, my question is: are there any rules / descriptions / agreements inside
> > the PostgreSQL Project that define which global variables inside the core
> > code should be specified with PGDLLIMPORT and which should not? Or is
> > there freedom: you need this variable in the extension (under
> > Windows), so make the patch for it yourself? Or is there a plan in the
> > community that all global non-static variables should be PGDLLIMPORT-ed by
> > default in the future? What is the right way to propose a PGDLLIMPORT patch
> > to the master and back-ported PostgreSQL code in order to avoid duplicate
> > patches in the extensions?
>
> Our policy so far has been to add PGDLLIMPORT to variables for which
> someone makes a case that an extension would have a reasonable use
> for it.  The bar's not terribly high, but it does exist.  The idea of
> just doing a blanket s/extern/extern PGDLLIMPORT/g has been discussed
> and rejected, because we don't want to commit to supporting absolutely
> every global variable as something that's okay for extensions to touch.
>

I agree that it doesn't make sense to mark all of them as a blanket rule.

I'd like to explicitly tag *non*-exported externs as
__attribute__((visibility("hidden"))) on GCC-alike ELF systems to ensure that
extension authors don't rely on them and then later find they cannot be used
on Windows. Obviously wrapped in some PG_NO_EXPORT or PG_DLL_HIDDEN macro.

I'm updating a patch at the moment that makes all GUC storage and most
variables computed from GUCs during hook execution PGDLLIMPORT. It might
make sense to follow that up with a patch to make non-export vars hidden.
But I vaguely recall raising this before and some folks not being a fan of
the extra noise on each line?


Re: Damaged (during upgrade?) table, how to repair?

2021-07-01 Thread W.P.

On 01.07.2021 at 22:27, Adrian Klaver wrote:

On 7/1/21 12:56 PM, W.P. wrote:

On 01.07.2021 at 16:19, Laurenz Albe wrote:

On Thu, 2021-07-01 at 10:56 +0200, W.P. wrote:
I have a PG 11.12 (was 9.5 or 9.5) upgraded from 9.x (during OS 
upgrade).


One table is now damaged, trying to dump it results in server restart,
message is "invalid record length  maximum is yyy" (from memory).

How can I (?) repair this table? (for recent data this works OK).

If you have a backup, take that.

If not, hire an expert in data recovery.

Yours,
Laurenz Albe


I have something similar to backup: old database, but it is 9.5, how 
can I install 9.5 binaries / libs / config on Fedora 30 i386 (with 
PG11 installed) side-by-side?


Is the old database on another machine where it can be started and 
then the data dumped?


It is on another disc, with an OS that has problems (F24 does not boot 
beyond single-user mode, NetworkManager doesn't start) after removing 
power / battery while in "suspend to RAM".


That's why I'm thinking about setting up 9.5 on my current machine (F30/PG11 
cluster) and then doing a dump; I only need to know how to do it (binaries 
/ libs / default cfg + data copy).
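A hedged sketch of one way to do that: build 9.5 from source into a private 
prefix and start it on a non-default port against a copy of the old data 
directory (the paths, port, and database name below are placeholders):

```shell
# Build PostgreSQL 9.5 (9.5.25 was the final 9.5 release) into $HOME/pg95;
# this does not touch the packaged PG11 installation.
wget https://ftp.postgresql.org/pub/source/v9.5.25/postgresql-9.5.25.tar.bz2
tar xjf postgresql-9.5.25.tar.bz2
cd postgresql-9.5.25
./configure --prefix="$HOME/pg95"
make -j"$(nproc)" && make install

# Start the old cluster (work on a *copy* of the 9.5 data directory)
# on port 5433 so it does not clash with PG11 on 5432.
"$HOME/pg95/bin/pg_ctl" -D /path/to/copy/of/9.5/data -o "-p 5433" start

# Dump the database, then stop the temporary server.
"$HOME/pg95/bin/pg_dump" -p 5433 mydb > mydb.sql
"$HOME/pg95/bin/pg_ctl" -D /path/to/copy/of/9.5/data stop
```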



Laurent