Re: Oracle to Postgres - Transform Hash Partition

2024-06-07 Thread Laurenz Albe
On Thu, 2024-06-06 at 11:28 -0500, David Barbour wrote:
> Been an Oracle DBA for quite a while and we're moving from Oracle to Postgres.
> 
> I have a table that I need to transform into a hash partitioned table.
> I've gone through all the steps to create a hash partitioned table, with 8 
> partitions such as 
> 
> create table idev.assessment_result_2023_dab_part (like 
> idev.assessment_result_2023_dab) partition by hash (district_oid);
> 
> CREATE TABLE idev.assessment_result_2023_dab_part_assessment_result_2023_p1 
> PARTITION OF idev.assessment_result_2023_dab_part
>     FOR VALUES WITH (modulus 64, remainder 0)
> 
> etc through partition 2023_p8 FOR VALUES WITH (modulus 64, remainder 7)

That won't do.  If you use a modulus of 64, you need 64 partitions, one for
each possible division remainder.

If you want 8 partitions, you have to use modulus 8.
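
With modulus 8 that means exactly eight partitions, one per remainder.  A
sketch that generates them in a loop (table names follow the original post):

```sql
DO $$
BEGIN
  FOR r IN 0..7 LOOP
    EXECUTE format(
      'CREATE TABLE idev.assessment_result_2023_dab_part_p%s
         PARTITION OF idev.assessment_result_2023_dab_part
         FOR VALUES WITH (MODULUS 8, REMAINDER %s)',
      r + 1, r);
  END LOOP;
END $$;
```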

> Now I need to 'attach' the original table.  The problem I'm running into is
> there are no good examples of how to define the values.  
> 
> I've tried several iterations of various 'for values', 'values', 'for values 
> with',
> etc. but they all error out.
> 
> Here's an example:
>  alter table idev.assessment_result_2023_dab_part 
>  attach partition idev.assessment_result_2023_dab for values with(modulus 8, 
> remainder 1) to (modulus 8, remainder 7)
> 
> ERROR: syntax error at or near "to" LINE 2: ..._2023_dab for values 
> with(modulus 8, remainder 1) to (modulu...

I think you are confused about hash partitioning.

One partition is only for a single remainder.  You cannot have a partition for
several remainders.

Hash partitioning is mostly for splitting up a table into several parts of
roughly equal size.  You decide how many partitions you want; that will become
the modulus.  Then you have to create that many partitions, one for each 
remainder.

If you want to attach an existing table as a partition, that will only work if
all rows in the table belong into that partition.  Otherwise, you will get an
error.
So you typically won't be able to attach an existing table as a hash partition.

To convert an existing table into a hash partitioned table, you have to

- create a new, empty partitioned table with *all* its partitions

- transfer the data with "INSERT INTO hash_part_tab SELECT * FROM tab"
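
Sketched with the table names from this thread (assumes no concurrent
writers; a final rename swaps the old table for the partitioned one):

```sql
BEGIN;
-- Move the rows; the hash of district_oid routes each row to its partition.
INSERT INTO idev.assessment_result_2023_dab_part
  SELECT * FROM idev.assessment_result_2023_dab;
-- Swap names so applications keep using the original table name.
ALTER TABLE idev.assessment_result_2023_dab
  RENAME TO assessment_result_2023_dab_old;
ALTER TABLE idev.assessment_result_2023_dab_part
  RENAME TO assessment_result_2023_dab;
COMMIT;
```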

Yours,
Laurenz Albe




RE: Logical replication type- WAL recovery fails and changes the size of wal segment in archivedir

2024-06-07 Thread Meera Nair
Hi Ron,
We do have our own solution. We work for Commvault, a data protection company.
We support backup of PostgreSQL in different ways - streaming, snapshot, block 
level, backup from standby server.
PostgreSQL (commvault.com)

Regards,
Meera

From: Ron Johnson 
Sent: Thursday, June 6, 2024 9:56 PM
To: Laurenz Albe 
Cc: Meera Nair ; pgsql-general@lists.postgresql.org; Punit 
Pranesh Koujalgi ; B Ganesh Kishan 

Subject: Re: Logical replication type- WAL recovery fails and changes the size 
of wal segment in archivedir


On Wed, Jun 5, 2024 at 6:26 AM Laurenz Albe  wrote:
On Wed, 2024-06-05 at 06:36 +, Meera Nair wrote:
> 2024-06-05 11:41:32.369 IST [54369] LOG:  restored log file 
> "00050001006A" from archive
> 2024-06-05 11:41:33.112 IST [54369] LOG:  restored log file 
> "00050001006B" from archive
> cp: cannot stat ‘/home/pgsql/wmaster/00050001006C’: No such file 
> or directory
> 2024-06-05 11:41:33.167 IST [54369] LOG:  redo done at 1/6B000100
> 2024-06-05 11:41:33.172 IST [54369] FATAL:  archive file 
> "00050001006B" has wrong size: 0 instead of 16777216
> 2024-06-05 11:41:33.173 IST [54367] LOG:  startup process (PID 54369) exited 
> with exit code 1
> 2024-06-05 11:41:33.173 IST [54367] LOG:  terminating any other active server 
> processes
> 2024-06-05 11:41:33.174 IST [54375] FATAL:  archive command was terminated by 
> signal 3: Quit
> 2024-06-05 11:41:33.174 IST [54375] DETAIL:  The failed archive command was: 
> cp pg_wal/00050001006B 
> /home/pgsql/wmaster/00050001006B
> 2024-06-05 11:41:33.175 IST [54367] LOG:  archiver process (PID 54375) exited 
> with exit code 1
> 2024-06-05 11:41:33.177 IST [54367] LOG:  database system is shut down
>
> Here ‘/home/pgsql/wmaster’ is my archivedir (the folder where WAL segments 
> are restored from)
>
> Before attempting start, size of
> 00050001006B file was 16 MB.
> After failing to detect 00050001006C, there is a FATAL error 
> saying wrong size for 00050001006B
> Now the size of 00050001006B is observed as 2 MB. Size of all 
> other WAL segments remain 16 MB.
>
> -rw--- 1 postgres postgres  2359296 Jun  5 11:34 00050001006B

That looks like you have "archive_mode = always", and "archive_command" writes
back to the archive.  Don't do that.

In fact, don't write your own PITR backup process.  Use something like 
pgBackRest or Barman.
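
If a hand-rolled archive_command is kept anyway, a common safer shape looks
like this (a sketch; the path is the archivedir from the log above).  It
refuses to overwrite an existing segment instead of clobbering it:

```
# postgresql.conf (sketch)
archive_mode = on     # not "always": a standby then won't archive restored WAL
archive_command = 'test ! -f /home/pgsql/wmaster/%f && cp %p /home/pgsql/wmaster/%f'
```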



RE: Logical replication type- WAL recovery fails and changes the size of wal segment in archivedir

2024-06-07 Thread Meera Nair
Hi Laurenz,

Thanks a lot! 

Regards,
Meera

-Original Message-
From: Laurenz Albe  
Sent: Wednesday, June 5, 2024 3:56 PM
To: Meera Nair ; pgsql-general@lists.postgresql.org
Cc: Punit Pranesh Koujalgi ; B Ganesh Kishan 

Subject: Re: Logical replication type- WAL recovery fails and changes the size 
of wal segment in archivedir




On Wed, 2024-06-05 at 06:36 +, Meera Nair wrote:
> 2024-06-05 11:41:32.369 IST [54369] LOG:  restored log file 
> "00050001006A" from archive
> 2024-06-05 11:41:33.112 IST [54369] LOG:  restored log file 
> "00050001006B" from archive
> cp: cannot stat ‘/home/pgsql/wmaster/00050001006C’: No 
> such file or directory
> 2024-06-05 11:41:33.167 IST [54369] LOG:  redo done at 1/6B000100
> 2024-06-05 11:41:33.172 IST [54369] FATAL:  archive file 
> "00050001006B" has wrong size: 0 instead of 16777216
> 2024-06-05 11:41:33.173 IST [54367] LOG:  startup process (PID 54369) 
> exited with exit code 1
> 2024-06-05 11:41:33.173 IST [54367] LOG:  terminating any other active 
> server processes
> 2024-06-05 11:41:33.174 IST [54375] FATAL:  archive command was 
> terminated by signal 3: Quit
> 2024-06-05 11:41:33.174 IST [54375] DETAIL:  The failed archive 
> command was: cp pg_wal/00050001006B 
> /home/pgsql/wmaster/00050001006B
> 2024-06-05 11:41:33.175 IST [54367] LOG:  archiver process (PID 54375) 
> exited with exit code 1
> 2024-06-05 11:41:33.177 IST [54367] LOG:  database system is shut down
>
> Here ‘/home/pgsql/wmaster’ is my archivedir (the folder where WAL 
> segments are restored from)
>
> Before attempting start, size of
> 00050001006B file was 16 MB.
> After failing to detect 00050001006C, there is a FATAL 
> error saying wrong size for 00050001006B Now the size of 
> 00050001006B is observed as 2 MB. Size of all other WAL segments 
> remain 16 MB.
>
> -rw--- 1 postgres postgres  2359296 Jun  5 11:34 
> 00050001006B

That looks like you have "archive_mode = always", and "archive_command" writes 
back to the archive.  Don't do that.

Yours,
Laurenz Albe


Re: Poor performance after restoring database from snapshot on AWS RDS

2024-06-07 Thread Sam Kidman
> This is due to the way that RDS restores snapshots.

Thanks, I never would have guessed. Would vacuum analyze be sufficient
to defeat the lazy loading, or would we need to do something more
specific to our application? (for example, select count(*) on some
commonly used tables)

I think vacuum full would certainly defeat the lazy loading since it
would copy all of the table data, but that may take a very long time
to run. I think vacuum analyze only scans a subset of rows but I might
be wrong about that.

Best, Sam

On Wed, Jun 5, 2024 at 10:09 PM Jeremy Smith  wrote:
>
> On Wed, Jun 5, 2024 at 4:23 AM Sam Kidman  wrote:
>
> > We get very poor performance in the staging environment after this
> > restore takes place - after some usage it seems to get better perhaps
> > because of caching.
> >
>
> This is due to the way that RDS restores snapshots.
>
> From the docs 
> (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html):
>
> You can use the restored DB instance as soon as its status is
> available. The DB instance continues to load data in the background.
> This is known as lazy loading.
>
> If you access data that hasn't been loaded yet, the DB instance
> immediately downloads the requested data from Amazon S3, and then
> continues loading the rest of the data in the background.
>
>
>
>   -Jeremy




PG16.1 security breach?

2024-06-07 Thread Zwettler Markus (OIZ)
I am running the following on Postgres 16.1 in database "postgres" as a 
superuser:

revoke create on schema public from public;
revoke create on database postgres from public;
create schema if not exists oiz;
revoke create on schema oiz from public;
grant usage on schema oiz to public;

create or replace function oiz.f_set_dbowner (p_dbowner text, p_dbname text)
returns void
language plpgsql
security definer
as $$
...



When I create a new role as follows:

create role testuser with password 'testuser' login;

postgres=# \du testuser
 List of roles
Role name | Attributes
---+
testuser  |



then this new role is able to execute the function oiz.f_set_dbowner 
immediately, even though I did not grant execute on this function to this role!

postgres=> \conninfo
You are connected to database "postgres" as user "testuser" on host 
"cmpgdb-pg-eng900.eng.cmp.szh.loc" (address "10.199.112.56") at port "5017".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: 
off)
postgres=> select oiz.f_set_dbowner ('testuser','database1');
f_set_dbowner
---

(1 row)



The role is also able to execute the function even after I revoke the execute 
privilege explicitly:

revoke execute on function oiz.f_set_dbowner (p_dbowner text, p_dbname text) 
from testuser;



There are also no default privileges on the schema:

postgres=# \ddp
 Default access privileges
Owner | Schema | Type | Access privileges
---++--+---
(0 rows)


postgres=> \df+ oiz.f_set_dbowner
List of functions
-[ RECORD 1 ]-------+------------------------------
Schema              | oiz
Name                | f_set_dbowner
Result data type    | void
Argument data types | p_dbowner text, p_dbname text
Type                | func
Volatility          | volatile
Parallel            | unsafe
Owner               | postgres
Security            | definer
Access privileges   | =X/postgres         +
                    | postgres=X/postgres
Language            | plpgsql
Internal name       |
Description         |
(1 row)


postgres=> \l postgres
List of databases
-[ RECORD 1 ]-----+----------------------
Name              | postgres
Owner             | postgres
Encoding          | UTF8
Locale Provider   | libc
Collate           | de_CH.utf-8
Ctype             | de_CH.utf-8
ICU Locale        |
ICU Rules         |
Access privileges | =Tc/postgres         +
                  | postgres=CTc/postgres
(1 row)



What am I missing? Is there something new in PG 16? Is it a bug?



Cheers, Markus







Re: PG 14 pg_basebackup accepts --compress=server-zst option

2024-06-07 Thread Ron Johnson
On Fri, Jun 7, 2024 at 12:32 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thursday, June 6, 2024, Kashif Zeeshan  wrote:
>
>> Hi
>>
>> On Fri, Jun 7, 2024 at 6:54 AM Ron Johnson 
>> wrote:
>>
>>>
>>> https://www.postgresql.org/docs/14/app-pgbasebackup.html doesn't
>>> mention "--compress=[{client|server}-]method".  That first appears in the
>>> v15 docs.
>>>
>>> And yet pg_basebackup doesn't complain about an invalid option.
>>> (Technically, this is a bug; I first noticed it a week after copying a
>>> script from a PG 15 server to five PG 14 servers, and running it quite a
>>> few times without fail.)
>>>
>>
Seems a bit suspect, but as your script doesn’t mention tar, the option
itself is apparently ignored, I guess silently.
>

Does this mean that "--compress=server-zst" is only relevant with
--format=tar?


> Assuming this isn’t an actual regression in behavior in a patch-released
> older version
>

My apologies for not mentioning the version: 14.12-1PGDG-rhel8.


> I don’t see us adding an error message at this point.
>

Me neither.  It just seemed odd.


Re: Poor performance after restoring database from snapshot on AWS RDS

2024-06-07 Thread Ron Johnson
On Fri, Jun 7, 2024 at 4:36 AM Sam Kidman  wrote:

> > This is due to the way that RDS restores snapshots.
>
> Thanks, I never would have guessed. Would vacuum analyze be sufficient
> to defeat the lazy loading or would we need to do something more
> specific to our application? (for example. select(*) on some commonly
> used tables)
>

https://www.postgresql.org/docs/14/pgprewarm.html

pg_prewarm is probably what you want.  Don't know if RDS Postgresql
supports it or not, though.
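
A sketch of how that looks (assumes the extension is available; the table
name is a placeholder):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
-- Read an entire commonly used table into shared buffers; on RDS this
-- also forces the lazily loaded blocks to be fetched in one pass.
SELECT pg_prewarm('idev.some_busy_table');  -- placeholder table name
```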


>
> I think vacuum full would certainly defeat the lazy loading since it
> would copy all of the table data, but that may take a very long time
> to run. I think vacuum analyze only scans a subset of rows but I might
> be wrong about that.
>
> Best, Sam
>
> On Wed, Jun 5, 2024 at 10:09 PM Jeremy Smith 
> wrote:
> >
> > On Wed, Jun 5, 2024 at 4:23 AM Sam Kidman  wrote:
> >
> > > We get very poor performance in the staging environment after this
> > > restore takes place - after some usage it seems to get better perhaps
> > > because of caching.
> > >
> >
> > This is due to the way that RDS restores snapshots.
> >
> > From the docs (
> https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
> ):
> >
> > You can use the restored DB instance as soon as its status is
> > available. The DB instance continues to load data in the background.
> > This is known as lazy loading.
> >
> > If you access data that hasn't been loaded yet, the DB instance
> > immediately downloads the requested data from Amazon S3, and then
> > continues loading the rest of the data in the background.
> >
> >
> >
> >   -Jeremy
>
>
>


Re: PG16.1 security breach?

2024-06-07 Thread Joe Conway

On 6/7/24 07:04, Zwettler Markus (OIZ) wrote:
I am running the following on Postgres 16.1 in database "postgres" as a 
superuser:





create or replace function oiz.f_set_dbowner (p_dbowner text, p_dbname text)





create role testuser with password 'testuser' login;




then this new role is able to execute the function oiz.f_set_dbowner 
immediately, even though I did not grant execute on this function to this role!


See:
https://www.postgresql.org/docs/current/sql-createfunction.html

In particular, this part:
8<
Another point to keep in mind is that by default, execute privilege is 
granted to PUBLIC for newly created functions (see Section 5.7 for more 
information). Frequently you will wish to restrict use of a security 
definer function to only some users. To do that, you must revoke the 
default PUBLIC privileges and then grant execute privilege selectively. 
To avoid having a window where the new function is accessible to all, 
create it and set the privileges within a single transaction. For example:

8<
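
The docs' example itself is elided above; applied to the function from this
thread, the pattern looks like this (a sketch; the admin role name is
hypothetical):

```sql
BEGIN;
CREATE OR REPLACE FUNCTION oiz.f_set_dbowner (p_dbowner text, p_dbname text)
  RETURNS void LANGUAGE plpgsql SECURITY DEFINER
  AS $$ BEGIN /* body as in the original post */ END $$;
-- Remove the default grant to PUBLIC before anyone can call it...
REVOKE ALL ON FUNCTION oiz.f_set_dbowner(text, text) FROM PUBLIC;
-- ...then grant back selectively.
GRANT EXECUTE ON FUNCTION oiz.f_set_dbowner(text, text) TO dba_role;
COMMIT;
```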

HTH,

--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com





AW: [Extern] Re: PG16.1 security breach?

2024-06-07 Thread Zwettler Markus (OIZ)
> -Ursprüngliche Nachricht-
> Von: Joe Conway 
> Gesendet: Freitag, 7. Juni 2024 15:22
> An: Zwettler Markus (OIZ) ; pgsql-
> gene...@lists.postgresql.org
> Betreff: [Extern] Re: PG16.1 security breach?
> 
> On 6/7/24 07:04, Zwettler Markus (OIZ) wrote:
> > I am running the following on Postgres 16.1 in database "postgres" as
> > a
> > superuser:
> 
> 
> 
> > create or replace function oiz.f_set_dbowner (p_dbowner text, p_dbname
> > text)
> 
> 
> 
> > create role testuser with password 'testuser' login;
> 
> 
> 
> > then this new role is able to execute the function oiz.f_set_dbowner
> > immediately, even though I did not grant execute on this function to this role!
> 
> See:
> https://www.postgresql.org/docs/current/sql-createfunction.html
> 
> In particular, this part:
> 8<
> Another point to keep in mind is that by default, execute privilege is 
> granted to
> PUBLIC for newly created functions (see Section 5.7 for more information).
> Frequently you will wish to restrict use of a security definer function to 
> only some
> users. To do that, you must revoke the default PUBLIC privileges and then 
> grant
> execute privilege selectively.
> To avoid having a window where the new function is accessible to all, create 
> it and
> set the privileges within a single transaction. For example:
> 8<
> 
> HTH,
> 
> --
> Joe Conway
> PostgreSQL Contributors Team
> RDS Open Source Databases
> Amazon Web Services: https://aws.amazon.com
> 
> --- Externe Email: Vorsicht mit Anhängen, Links oder dem Preisgeben von
> Informationen ---


Argh. No! What a bad habit!

It might be a good idea to file an enhancement request for a global parameter 
to disable this behavior.

Thanks Markus



Re: PG16.1 security breach?

2024-06-07 Thread David G. Johnston
On Friday, June 7, 2024, Zwettler Markus (OIZ) 
wrote:

>
> grant usage on schema oiz to public;
>
>
>
> The role is also able to execute the function even after I revoke the execute
> privilege explicitly:
>
>
>
> revoke execute on function oiz.f_set_dbowner (p_dbowner text, p_dbname
> text) from testuser;
>
>
You never typed “grant execute … to testuser” nor set up a default privilege
for them, so there is nothing there to revoke.  As was noted, the combination
of your explicit usage grant and the default execute grant given to the
public pseudo-role enables this.
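
A sketch of the revoke that actually has an effect here, since the grant
sits on PUBLIC rather than on testuser:

```sql
-- Remove the default grant from the pseudo-role that actually holds it.
REVOKE EXECUTE ON FUNCTION oiz.f_set_dbowner(text, text) FROM PUBLIC;
-- Then grant back selectively, e.g. to a hypothetical admin role:
-- GRANT EXECUTE ON FUNCTION oiz.f_set_dbowner(text, text) TO dba_role;
```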

>
>
> There are also no default privileges on the schema:
>
>
You explicitly granted usage to the pseudo-role public…


It is doubtful we’d add a global setting to control this.  And it’s a
hard sell changing such a pervasive default.  As most functions are
security invoker, and many are side-effect free, the default does have
merit.  If your function is neither, undoing the default is something
that should probably be done.

I could maybe see adding a new “revoke all default privileges from public”
command.

David J.


Re: AW: [Extern] Re: PG16.1 security breach?

2024-06-07 Thread Laurenz Albe
On Fri, 2024-06-07 at 13:54 +, Zwettler Markus (OIZ) wrote:
> > Another point to keep in mind is that by default, execute privilege is 
> > granted to
> > PUBLIC for newly created functions (see Section 5.7 for more information).
> 
> Argh. No! What a bad habit!
> 
> It might be a good idea to file an enhancement request for a global parameter
> to disable this behavior.

I don't see the problem, since the default execution mode for functions is
SECURITY INVOKER.

But you can easily change that:

  ALTER DEFAULT PRIVILEGES FOR ROLE function_creator REVOKE EXECUTE ON FUNCTION 
FROM PUBLIC;

Yours,
Laurenz Albe




Re: AW: [Extern] Re: PG16.1 security breach?

2024-06-07 Thread Adrian Klaver

On 6/7/24 06:54, Zwettler Markus (OIZ) wrote:

-Ursprüngliche Nachricht-
Von: Joe Conway 
Gesendet: Freitag, 7. Juni 2024 15:22
An: Zwettler Markus (OIZ) ; pgsql-
gene...@lists.postgresql.org
Betreff: [Extern] Re: PG16.1 security breach?

On 6/7/24 07:04, Zwettler Markus (OIZ) wrote:




Argh. No! What a bad habit!

It might be a good idea to file an enhancement request for a global parameter 
to disable this behavior.


Read this

https://www.postgresql.org/docs/current/ddl-priv.html

through several times, it will make things clearer. In particular the 
part that starts "PostgreSQL grants privileges on some types of objects 
to PUBLIC by default when the objects are created. ..."




Thanks Markus



--
Adrian Klaver
adrian.kla...@aklaver.com





PG16.1 security breach?

2024-06-07 Thread David G. Johnston
On Friday, June 7, 2024, Laurenz Albe  wrote:

> On Fri, 2024-06-07 at 13:54 +, Zwettler Markus (OIZ) wrote:
> > > Another point to keep in mind is that by default, execute privilege is
> granted to
> > > PUBLIC for newly created functions (see Section 5.7 for more
> information).
> >
> > Argh. No! What a bad habit!
> >
> > It might be a good idea to file an enhancement request for a global
> > parameter to disable this behavior.
>
> I don't see the problem, since the default execution mode for functions is
> SECURITY INVOKER.
>
> But you can easily change that:
>
>   ALTER DEFAULT PRIVILEGES FOR ROLE function_creator REVOKE EXECUTE ON
> FUNCTION FROM PUBLIC;
>


You named function_creator here, when in this example the role creating the
new object is postgres.  How is it that the default privilege granted to
public doesn’t seem to care who the object creator is, yet when revoking the
grant one supposedly can only do so within the scope of a single role?

David J.


Re: Questions on logical replication

2024-06-07 Thread Adrian Klaver

On 6/6/24 15:19, Koen De Groote wrote:

I'll give them a read, though it might take a few weekends

Meanwhile, this seems to be what I'm looking for:

 From 
https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS 


" Replication slots provide an automated way to ensure that the primary 
does not remove WAL segments until they have been received by all 
standbys, and that the primary does not remove rows which could cause a 
recovery conflict even when the standby is disconnected."


I'm reading that as: "if there is a replication slot, if the standby is 
disconnected, WAL is kept"


And if we know WAL is kept in the "pg_wal" directory, that sounds like 
it could slowly but surely fill up disk space.
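
A quick way to see how much WAL each slot is holding back, sketched with the
standard catalog view:

```sql
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
         AS retained_wal
FROM pg_replication_slots;
```

Since v13, max_slot_wal_keep_size can cap this retention, at the cost of
invalidating slots that fall too far behind.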



But again, I'll give them a read. I've read all of logical replication 
already, and I feel like I didn't get my answer there.


It would be a good idea to provide a fairly specific outline of what 
you are trying to achieve; then it would be easier for folks to offer 
suggestions on what to do or not to do.




Thanks for the help


Regards,
Koen De Groote


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PG 14 pg_basebackup accepts --compress=server-zst option

2024-06-07 Thread Tom Lane
Ron Johnson  writes:
> On Fri, Jun 7, 2024 at 12:32 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>> I don’t see us adding an error message at this point.

> Me neither.  It just seemed odd.

v14 thinks the argument of --compress must be an integer, and doesn't
really bother with any syntax error checks:

            case 'Z':
                compresslevel = atoi(optarg);
                if (compresslevel < 0 || compresslevel > 9)
                {
                    pg_log_error("invalid compression level \"%s\"", optarg);
                    exit(1);
                }
                break;

In your example, atoi() will return zero and it will sail along with
no compression.  Releases 15 and up have more complex ideas of what
--compress can specify, and seem to syntax-check it much more
thoroughly.

This is a pretty common coding pattern, so I can't get excited
about changing it, especially not in long-stable branches.

regards, tom lane