Re: DROP COLLATION vs pg_collation question

2024-06-12 Thread Laurenz Albe
On Tue, 2024-06-11 at 23:15 +0200, Karsten Hilbert wrote:
> maybe a naive question but I was unable to find an answer in
> the fine manual (sv_SE being an example)
> 
> Does running
> 
>   DROP COLLATION IF EXISTS pg_catalog."sv_SE"
> 
> also remove the corresponding row from pg_collation (assuming
> nothing depends on collation sv_SE) ?
> 
> Experimentation seems to hint that way but I was unable to
> confirm.

Yes, that will delete a row from "pg_collation".  System catalogs
like "pg_collation" are where the database metadata are actually
stored.

Note that with DROP COLLATION you can only remove collations
that belong to the encoding of your current database.
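This can be confirmed directly against the catalog; a quick before-and-after check (sketch, assuming the sv_SE collation from the example exists in the current database):

```sql
SELECT count(*) FROM pg_collation WHERE collname = 'sv_SE';  -- e.g. 1
DROP COLLATION IF EXISTS pg_catalog."sv_SE";
SELECT count(*) FROM pg_collation WHERE collname = 'sv_SE';  -- row is gone
```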

Yours,
Laurenz Albe




postgres table statistics

2024-06-12 Thread Chandy G
Hi,
  We have PostgreSQL 13.9 running with tables that have billions of records of
varying sizes. Even though the PG JDBC driver provides a way to set the fetch
size to tune the driver for better throughput, the JVM fails at the driver
level when records of large size (say 200 MB each) flow through. This forces
us to reduce the fetch size (if we are to operate at a fixed Xmx setting on
the client JVM).

It gets a bit trickier when hundreds of such tables exist with varying record
sizes. We are trying to see if the fetch size can be set dynamically based on
the row count and the record size distribution of a table. Unfortunately,
getting this data by running a query against each table (for row size:
max(length(t::text))) seems to be quite time consuming too.

Does Postgres maintain metadata about tables for the following?
1. row count
2. max row size

Or is there some other PG metadata that can help get this data quicker?
TIA.
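Whatever metadata source is used for the row-size estimate, the sizing arithmetic itself is simple: divide the heap budget reserved for one fetched batch by the worst-case row size. A minimal sketch (the budget figure and the cap are illustrative assumptions, not values from the thread):

```python
def choose_fetch_size(heap_budget_bytes: int, max_row_bytes: int,
                      cap: int = 10_000) -> int:
    """Pick a JDBC fetch size so one fetched batch fits in the heap budget.

    cap keeps tiny-row tables from requesting absurdly large batches.
    """
    if max_row_bytes <= 0:
        raise ValueError("max_row_bytes must be positive")
    return max(1, min(cap, heap_budget_bytes // max_row_bytes))

# 512 MB of head-room with 200 MB rows -> fetch only 2 rows per round trip
print(choose_fetch_size(512 * 1024**2, 200 * 1024**2))  # 2
# narrow rows hit the cap instead
print(choose_fetch_size(512 * 1024**2, 2048))           # 10000
```

The same computation could of course live in the Java client; the point is only that a per-table estimate of maximum row size is the missing input.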





Re: postgres table statistics

2024-06-12 Thread Ron Johnson
On Wed, Jun 12, 2024 at 3:48 AM Chandy G  wrote:

> Hi,
>   We have postgres 13.9 running with tables thats got billions of records
> of varying sizes. Eventhough pg jdbc driver  provides a way to set fetch
> size to tune the driver to achieve better throughput, the JVM fails at the
> driver level when records of large size (say 200mb each) flows through.
> this forces to reduce the fetch size (if were to operate at a fixed Xmx
> setting of client jvm).
>
> It get a bit trickier when 100s of such tables exists with varying records
> sizes. trying to see if the fetch size can be set dynamically based on the
> row count and the record size distribution for a table. Unfortunately,
> trying to get this data by a query run against each table (for row size:
> max(length(t::text))) seem to be  quite time consuming too.
>

Maybe create your own table with three columns:
table_name (PK; taken from pg_class.relname)
average_rec_size (taken from sum(pg_stat.avg_width))
max_rec_size (calculated yourself)

Periodically refresh it.  (How periodic depends on how often the average
and max change substantively.)

Does postgres maintain metadata about tables for the following.
> 1. row count
>

https://www.postgresql.org/docs/13/catalog-pg-class.html

pg_class.reltuples.  This is an estimate, so make sure your tables are
regularly analyzed.


> 2. max row size.
>

https://www.postgresql.org/docs/13/view-pg-stats.html

pg_stats.avg_width


> or is there some other pg metadata that can help get this data quicker.
>
> TIA.
>
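Putting those two catalog sources together, something like the following gives both numbers in one pass (a sketch; note that `avg_width` is a per-column average maintained by ANALYZE, so the sum estimates a typical row, not the maximum):

```sql
SELECT c.relname           AS table_name,
       c.reltuples::bigint AS estimated_rows,  -- refreshed by ANALYZE
       sum(s.avg_width)    AS avg_row_bytes    -- sum of per-column averages
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_stats s ON s.schemaname = n.nspname
                    AND s.tablename  = c.relname
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
GROUP BY c.relname, c.reltuples;
```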


Oracle Linux 9 Detected RPMs with RSA/SHA1 signature

2024-06-12 Thread Hans Schou
Hi

On my test server I have Oracle Linux 8.10 installed.
Here I have installed postgresql 16.1 from postgresql.org repository.

Upgrade to Oracle Linux 9:
When doing a »leapp preupgrade --oraclelinux« I get the message below.

I want to have postgresql.org as my repo for PostgreSQL and Oracle Linux
for the rest. But it fails due to this SHA1 signature.

As Oracle Linux 8 has had PostgreSQL 16.1 in its repo since April 2024, I
could just disable the pg-repo and use the ol-repo. But is this the
recommended way to do it?


Output from /var/log/leapp/leapp-report.txt

Risk Factor: high (inhibitor)
Title: Detected RPMs with RSA/SHA1 signature
Summary: Digital signatures using SHA-1 hash algorithm are no longer
considered secure and are not allowed to be used on OL 9 systems by
default. This causes issues when using DNF/RPM to handle packages with
RSA/SHA1 signatures as the signature cannot be checked with the default
cryptographic policy. Any such packages cannot be installed, removed, or
replaced unless the signature check is disabled in dnf/rpm or SHA-1 is
enabled using non-default crypto-policies. For more information see the
following documents:
  - Major changes in OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/relnotes9.4/ol9-NewFeaturesandChanges.html
  - Security Considerations in adopting OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/security/security-ImplementingAdditionalSecurityFeaturesandBestPractices.html#system-crypto-policies
 The list of problematic packages:
- libpq5 (DSA/SHA1, Fri 15 Sep 2023 12:11:13 PM CEST, Key ID
1f16d2e1442df0f8)
- postgresql16 (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key ID
1f16d2e1442df0f8)
- pgdg-redhat-repo (DSA/SHA1, Thu 14 Sep 2023 02:41:37 PM CEST, Key ID
1f16d2e1442df0f8)
- postgresql16-libs (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key ID
1f16d2e1442df0f8)
- postgresql16-contrib (DSA/SHA1, Mon 20 Nov 2023 10:56:23 AM CET, Key
ID 1f16d2e1442df0f8)
- postgresql16-server (DSA/SHA1, Mon 20 Nov 2023 10:56:22 AM CET, Key
ID 1f16d2e1442df0f8)
Related links:
- Major changes in OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/relnotes9.4/ol9-NewFeaturesandChanges.html
- Security Considerations in adopting OL 9:
https://docs.oracle.com/en/operating-systems/oracle-linux/9/security/security-ImplementingAdditionalSecurityFeaturesandBestPractices.html#system-crypto-policies
Remediation: [hint] It is recommended that you contact your package vendor
and ask them for new builds signed with supported signatures and install
the new packages before the upgrade. If this is not possible you may
instead remove the incompatible packages.
Key: f16f40f49c2329a2691c0801b94d31b6b3d4f876

-- 
𝕳𝖆𝖓𝖘 𝕾𝖈𝖍𝖔𝖚
☏ ➁➁ ➅➃ ➇⓪ ➁⓪


Re: Oracle Linux 9 Detected RPMs with RSA/SHA1 signature

2024-06-12 Thread Adrian Klaver

On 6/12/24 02:54, Hans Schou wrote:

Hi

On my test server I have Oracle Linux 8.10 installed.
Here I have installed postgresql 16.1 from the postgresql.org repository.


Upgrade to Oracle Linux 9:
When doing a »leapp preupgrade --oraclelinux« I get the message below.

I want to have postgresql.org as my repo for
PostgreSQL and Oracle Linux for the rest. But it fails due to this SHA1
signature.


As Oracle Linux 8 since April 2024 now have PostgreSQL 16.1 in the repo 
I could just disable the pg-repo and use the ol-repo. But is this the 
recommended way to do it?




Take a look at:

https://yum.postgresql.org/news/pgdg-rpm-repo-gpg-key-update/

Also the contact info for the RH packagers:

https://yum.postgresql.org/contact/

--
Adrian Klaver
adrian.kla...@aklaver.com





Re: postgres table statistics

2024-06-12 Thread Shammat


Chandy G wrote on 12.06.2024 at 09:47:
> Eventhough pg jdbc driver  provides a way to set fetch size to tune
> the driver to achieve better throughput, the JVM fails at the driver
> level when records of large size (say 200mb each) flows through.
> this forces to reduce the fetch size (if were to operate at a fixed
> Xmx setting of client jvm).


Did you try the driver's "adaptive fetch" feature?


https://jdbc.postgresql.org/documentation/use/#connection-parameters
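For reference, the relevant parameters from that page can be combined in the connection URL roughly like this (a sketch; host, database, and the chosen values are placeholders, and `maxResultBuffer` accepts a percentage of Xmx such as "10p"):

```
jdbc:postgresql://dbhost:5432/mydb?defaultRowFetchSize=1000&adaptiveFetch=true&adaptiveFetchMaximum=10000&maxResultBuffer=10p
```

Adaptive fetch adjusts the per-query fetch size from the sizes of rows already received, which is exactly the "dynamic fetch size" the original post asks about.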





Re: Does trigger only accept functions?

2024-06-12 Thread Isaac Morland
On Tue, 11 Jun 2024 at 18:25, Ron Johnson  wrote:

Since all the functions are going to be similar, I'd write a shell script
> to generate all the triggers, one per relevant table.  If you're going to record
> every field, then save effort, and don't bother enumerating them.  You'll
> need to dig into the PG catalog's guts to list columns in the correct
> order, but Google and Stack Exchange makes that easy enough.
>

I'd use a DO block and write a loop in PL/PGSQL. Then everything stays in
Postgres and you have all the support of Postgres when writing your
SQL-writing code (quote_ident, the reg* types, etc.).


Re: Questions on logical replication

2024-06-12 Thread Justin
On Tue, Jun 11, 2024 at 5:43 PM Koen De Groote  wrote:

> > If there are any errors during the replay of WAL such as missing indexes
> for Replica Identities during an Update or Delete  this will cause the main
> subscriber worker slot on the publisher to start backing up WAL files
>
> And also if the connection breaks, from what I understand, is that
> correct? Anything that stops the subscription, including disabling the
> subscription, is that right?
>

Yes to all


> > I suggest confirming all tables have replica identities or primary keys
> before going any further.
>
> Yes, I am aware of this. I made me a small script that prints which tables
> I have added to the publication and are done syncing, and which are
> currently not being replicated.
>


>
> > With PG 11 avoid REPLICA IDENTITY FULL as this causes full table scan on
> the subscriber for PG 15 and earlier.
>
> I'm also aware of this. My plan is to create a publication with no tables,
> and add them 1 by 1, refreshing the subscriber each time.
>

Why?  What benefit does this provide you?  Add all the tables when
creating the publication and be done with it.  I get doing this when trying
to understand how it all works on test boxes, but for production I have no
idea what you're trying to accomplish.


> I'm not planning on using "REPLICA IDENTITY FULL" anywhere.
>
Good
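To confirm the earlier advice before adding tables, a catalog query along these lines lists ordinary tables that are set to use a primary key for replica identity but don't have one (a sketch; adjust the schema filter as needed):

```sql
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND c.relreplident = 'd'              -- 'd' = default: use the primary key
  AND NOT EXISTS (SELECT 1 FROM pg_index i
                  WHERE i.indrelid = c.oid AND i.indisprimary);
```

Any table this returns will break UPDATE/DELETE replay on the subscriber.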


Question about UNIX socket connections and SSL

2024-06-12 Thread Casey & Gina
It seems that libpq (maybe?) disables SSL when connecting through a UNIX socket 
to the database.

My setup involves a HA database cluster managed by Patroni.  To route RW or RO 
connections to the correct node(s), we use haproxy, running locally on each 
application node.  In the interest of being as efficient as possible, not using 
TCP unnecessarily, and having the ability to set appropriate permissions on the 
socket files which increases security, we had configured the applications to 
connect to haproxy via local UNIX socket, and then haproxy would of course 
communicate over the network to the database servers via TCP.

More recently, we've started setting up SSL encryption and CA verification for 
all database connections going over the network.  I discovered when working on 
this that SSL was being disabled due to the client connecting to haproxy via 
UNIX socket.  After trying a bunch of things, I resigned myself to having to use 
and we changed the connection from the app to haproxy to TCP.

We also have a jump server set up for staff to connect to the database via an 
SSH tunnel.  When this is used, an individual's database connection goes from 
their client over TCP to the jump server via the SSH tunnel, which directs 
their connection to an haproxy instance running there via UNIX socket, which 
then in turn connects to the database using TCP.  Interestingly, even though 
traffic is being routed through a UNIX socket here, SSL encryption *does* work.

So why can't I use SSL when connecting from a client to a UNIX socket?  I can 
understand that verify-full wouldn't work without it, but verify-full doesn't 
work even when using TCP with haproxy, as "localhost" doesn't match the 
database hostname.  For now, I'm only concerned with the verify-ca sslmode.  Is 
there a workaround possible that doesn't involve using TCP unnecessarily?

-- 
Thanks,
- Casey



Re: DROP COLLATION vs pg_collation question

2024-06-12 Thread Karsten Hilbert
> > DROP COLLATION IF EXISTS pg_catalog."sv_SE"
>
> Yes, that will delete a row from "pg_collation".

Many thanks.

> Note that with DROP COLLATION you can only remove collations
> that belong to the encoding of your current database.

A-ha !  Can that bit be found anywhere in the docs ?

IOW, the following code is exactly useless ?

(because of the "collencoding <> _db_encoding" business ;-)

create function gm.remove_unneeded_collations()
    returns void
    language plpgsql
    security definer
as '
DECLARE
    _rec record;
    _db_name text;
    _db_encoding integer;
BEGIN
    SELECT pg_catalog.current_database() INTO _db_name;
    SELECT encoding INTO _db_encoding FROM pg_database WHERE datname = _db_name;
    RAISE NOTICE ''database [%]: removing collations for encodings other than the database encoding [%]'',
        _db_name, pg_catalog.pg_encoding_to_char(_db_encoding);
    FOR _rec IN (
        SELECT oid, collnamespace, collname, collencoding
        FROM pg_collation
        WHERE
            oid > 1000
            AND collencoding IS NOT NULL
            AND collencoding <> -1
            AND collencoding <> _db_encoding
    ) LOOP
        RAISE NOTICE ''dropping collation #% "%.%" (encoding: %)'',
            _rec.oid, _rec.collnamespace::regnamespace, _rec.collname,
            pg_catalog.pg_encoding_to_char(_rec.collencoding);
        BEGIN
            EXECUTE ''DROP COLLATION IF EXISTS ''
                || _rec.collnamespace::regnamespace || ''."'' || _rec.collname || ''"'';
        EXCEPTION
            WHEN undefined_object THEN
                RAISE NOTICE ''collation does not seem to exist (perhaps for the DB encoding ?)'';
        END;
    END LOOP;
END;';


The reason for this being the wish to reduce the risk surface
for locale version information changes at the OS level by
removing collations not relevant to a given database.

Thanks,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: Question about UNIX socket connections and SSL

2024-06-12 Thread Tom Lane
Casey & Gina  writes:
> So why can't I use SSL when connecting from a client to a UNIX socket?

(1) It'd add overhead without adding any security.  Data going through
a UNIX socket will only pass through the local kernel, and if that's
compromised then it's game over anyway.

(2) I'm less sure about this part, but I seem to recall that openssl
doesn't actually work if given a UNIX socket.

Maybe there are reasons why those arguments are obsolete, but you
haven't presented any.

regards, tom lane




Re: Question about UNIX socket connections and SSL

2024-06-12 Thread Daniel Gustafsson
> On 12 Jun 2024, at 21:17, Tom Lane  wrote:
> 
> Casey & Gina  writes:
>> So why can't I use SSL when connecting from a client to a UNIX socket?
> 
> (1) It'd add overhead without adding any security.  Data going through
> a UNIX socket will only pass through the local kernel, and if that's
> compromised then it's game over anyway.
> 
> (2) I'm less sure about this part, but I seem to recall that openssl
> doesn't actually work if given a UNIX socket.

That indeed used to be the case, at least until 1.0.2 and possibly 1.1.1, but
AF_UNIX is supported in 3+ IIRC. That being said, I agree with your (1).

--
Daniel Gustafsson





Re: PG16.1 security breach?

2024-06-12 Thread David G. Johnston
On Mon, Jun 10, 2024 at 2:21 AM Laurenz Albe 
wrote:

> > How is it that the default privilege granted to public doesn’t seem to
> care who the object creator
> > is yet when revoking the grant one supposedly can only do so within the
> scope of a single role?
>
> I don't understand what you wrote.  ALTER DEFAULT PRIVILEGES also only
> applies to objects
> created by a single role when you grant default privileges.
>
>
I think my point is that a paragraph like the following may be a useful
addition:

If one wishes to remove the default privilege granted to public to execute
all newly created procedures it is necessary to revoke that privilege for
every superuser in the system as well as any roles that directly have
create permission on a schema and also those that inherit a create
permission on a schema.  Lastly, any new roles created in the future with
direct or indirect create permission on a schema must also be altered.  In
other words, the first time a role creates a routine, the default privileges
involved with that creation will include granting execute to public,
unless said default privileges have already been revoked.

Maybe generalized to any of the default privileges.  I find the existing
wording to gloss over the fact that one cannot just decide up front they
want to not allow these default privileges to public once on a system-wide
basis but must continually maintain the default privileges as new roles are
added that are allowed to create different objects, directly or otherwise.

David J.


Defining columns for INSERT statements

2024-06-12 Thread Rich Shepard

I have > 100 rows to add to a table using INSERT INTO statements. I want the
PK to be the next value in the sequence. Would this be the appropriate
syntax for the columns to be entered?

INSERT INTO people (person_nbr DEFAULT('people_person_nbr_seq'), lname, fname,
    job_title DEFAULT 'Contact', company_nbr, loc_nbr, direct_phone,
    cell_phone, email, active DEFAULT('true')) VALUES
( ...

TIA,

Rich




UPDATE with multiple WHERE conditions

2024-06-12 Thread Rich Shepard

I have a table with 3492 rows. I want to update a boolean column from
'false' to 'true' for 295 rows based on the value of another column.

Is there a way to access a file with those condition values? If not, should
I create a temporary table with one column containing those values, or do I
write a psql script with 295 lines, one for each row to be updated?

TIA,

Rich




Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Muhammad Salahuddin Manzoor
Greetings,

You can use a temporary table: create one with a single column containing
the condition values and then use it to update your main table. This
approach is more flexible and cleaner than writing a script with multiple
UPDATE statements.

-- Create a temporary table with one column containing the condition values
CREATE TEMPORARY TABLE temp_conditions (condition_value TEXT);

-- Insert the condition values into the temporary table
INSERT INTO temp_conditions (condition_value) VALUES
('value1'),
('value2'),
('value3'),
-- Add more values as needed...
('value295');

-- Update the boolean column based on the condition values
UPDATE your_table
SET boolean_column = true
WHERE condition_column IN (SELECT condition_value FROM temp_conditions);

-- Clean up: drop the temporary table
DROP TABLE IF EXISTS temp_conditions;

*Salahuddin (살라후딘**)*


On Thu, 13 Jun 2024 at 02:28, Rich Shepard  wrote:

> I have a table with 3492 rows. I want to update a boolean column from
> 'false' to 'true' for 295 rows based on the value of another column.
>
> Is there a way to access a file with those condition values? If not, should
> I create a temporary table with one column containing those values, or do I
> write a psql script with 295 lines, one for each row to be updated?
>
> TIA,
>
> Rich
>
>
>


Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Rich Shepard

On Thu, 13 Jun 2024, Muhammad Salahuddin Manzoor wrote:


You can use Temporary table. You could create a temporary table with one
column containing the condition values and then use it to update your main
table. This approach can be more flexible and cleaner than writing a
script with multiple update statements.


Salahuddin,

Thank you. I thought this would be the best approach.

Regards,

Rich




Re: Defining columns for INSERT statements

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, Rich Shepard wrote:


VALUES (nextval('people_person_nbr_seq'), ...


Correction.

Rich






Re: PG16.1 security breach?

2024-06-12 Thread Ron Johnson
On Wed, Jun 12, 2024 at 4:36 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Jun 10, 2024 at 2:21 AM Laurenz Albe 
> wrote:
>
>> > How is it that the default privilege granted to public doesn’t seem to
>> care who the object creator
>> > is yet when revoking the grant one supposedly can only do so within the
>> scope of a single role?
>>
>> I don't understand what you wrote.  ALTER DEFAULT PRIVILEGES also only
>> applies to objects
>> created by a single role when you grant default privileges.
>>
>>
> I think my point is that a paragraph like the following may be a useful
> addition:
>
> If one wishes to remove the default privilege granted to public to execute
> all newly created procedures it is necessary to revoke that privilege for
> every superuser in the system
>

That seems... excessive.  You can revoke other privileges from public (can't
you?), so why do seemingly only procedures/functions have this difficulty?


Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Ron Johnson
On Wed, Jun 12, 2024 at 5:28 PM Rich Shepard 
wrote:

> I have a table with 3492 rows. I want to update a boolean column from
> 'false' to 'true' for 295 rows based on the value of another column.
>
> Is there a way to access a file with those condition values? If not, should
> I create a temporary table with one column containing those values, or do I
> write a psql script with 295 lines, one for each row to be updated?
>

A plain UPDATE might work.

UPDATE to_be_updated a
SET bool_col = true
FROM other_table b
WHERE a.pk = b.pk
  AND b.field3 = mumble;

(You can join them, right?)


Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Rob Sargent



On 6/12/24 15:48, Ron Johnson wrote:
On Wed, Jun 12, 2024 at 5:28 PM Rich Shepard 
 wrote:


I have a table with 3492 rows. I want to update a boolean column from
'false' to 'true' for 295 rows based on the value of another column.

Is there a way to access a file with those condition values? If
not, should
I create a temporary table with one column containing those
values, or do I
write a psql script with 295 lines, one for each row to be updated?


A plain UPDATE might work.
UPDATE to_be_updated a
SET bool_col = true
FROM other_table b
WHERE a.pk = b.pk
  AND b.field3 = mumble;

(You can join them, right?)
Add "begin;" to that and try it.  If you don't get exactly "UPDATE 295"
reported, then "rollback;".
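Spelled out, that safety net looks like this (a sketch using the placeholder table and column names from Ron's example):

```sql
BEGIN;

UPDATE to_be_updated a
SET bool_col = true
FROM other_table b
WHERE a.pk = b.pk
  AND b.field3 = mumble;
-- psql reports the affected row count, e.g. "UPDATE 295"

COMMIT;      -- if the count is the expected 295
-- ROLLBACK; -- otherwise, nothing is changed
```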


Re: PG16.1 security breach?

2024-06-12 Thread David G. Johnston
On Wed, Jun 12, 2024 at 2:37 PM Ron Johnson  wrote:

> On Wed, Jun 12, 2024 at 4:36 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Mon, Jun 10, 2024 at 2:21 AM Laurenz Albe 
>> wrote:
>>
>>> > How is it that the default privilege granted to public doesn’t seem to
>>> care who the object creator
>>> > is yet when revoking the grant one supposedly can only do so within
>>> the scope of a single role?
>>>
>>> I don't understand what you wrote.  ALTER DEFAULT PRIVILEGES also only
>>> applies to objects
>>> created by a single role when you grant default privileges.
>>>
>>>
>> I think my point is that a paragraph like the following may be a useful
>> addition:
>>
>> If one wishes to remove the default privilege granted to public to
>> execute all newly created procedures it is necessary to revoke that
>> privilege for every superuser in the system
>>
>
> That seems... excessive.  You can revoke other privs from public (can't
> you?), so why seemingly only do procedures/functions have this difficulty.
>
>
Neither domain, language, nor type seems problematic.  That just leaves
connect and temp on databases, which indeed have a similar issue, but the
number of roles with createdb is likely significantly smaller than the
number with create on schema.

David J.


Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread David G. Johnston
On Wed, Jun 12, 2024 at 2:28 PM Rich Shepard 
wrote:

> I have a table with 3492 rows. I want to update a boolean column from
> 'false' to 'true' for 295 rows based on the value of another column.
>
> Is there a way to access a file with those condition values?
>

I'll often just use a spreadsheet to build the 295 update commands and
copy-paste them into psql or whatnot.

David J.


Re: Defining columns for INSERT statements

2024-06-12 Thread David G. Johnston
On Wed, Jun 12, 2024 at 2:11 PM Rich Shepard 
wrote:

> I have > 100 rows to add to a table using INSERT INTO statements. I want
> the
> PK to be the next value in the sequence. Would this be the appropriate
> syntax for the columns to be entered?
>

The whole point of the server is to parse text and tell you if it has
syntax errors and then, if not, execute what you gave it.


> INSERT INTO people (person_nbr
> DEFAULT('people_person_nbr_seq'),lname,fname,job_title DEFAULT
> 'Contact',company_nbr,loc_nbr,direct_phone,cell_phone,email,active
> DEFAULT('true')) VALUES
>

https://www.postgresql.org/docs/current/sql-insert.html

Not sure where you got the idea that something besides a plain column name
can appear in the optional parentheses after the table name.

David J.


Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, David G. Johnston wrote:


I'll often just use a spreadsheet to build the 295 update commands and
copy-paste them into psql or whatnot.


David,

I'll create and use a temporary table.

Thanks,

Rich




Re: Defining columns for INSERT statements

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, David G. Johnston wrote:


INSERT INTO people (person_nbr
DEFAULT('people_person_nbr_seq'),lname,fname,job_title DEFAULT
'Contact',company_nbr,loc_nbr,direct_phone,cell_phone,email,active
DEFAULT('true')) VALUES



https://www.postgresql.org/docs/current/sql-insert.html

Not sure where you got the idea that something besides a plain column name
can appear in the optional parentheses after the table name.


David,

I've not used nextval() before which is why I asked.

Thanks,

Rich




Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, Ron Johnson wrote:


A plain UPDATE might work.

UPDATE to_be_updated a
SET bool_col = true
FROM other_table b
WHERE a.pk = b.pk
 AND b.field3 = mumble;

(You can join them, right?)


Thanks, Ron.

Rich




Re: UPDATE with multiple WHERE conditions

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, Rob Sargent wrote:


Add "begin;" to that and try it.  If you don't get exactly UPDATE 295
reported, then "rollback;";


Got it, thanks.

Rich




Re: PG16.1 security breach?

2024-06-12 Thread Tom Lane
Ron Johnson  writes:
> On Wed, Jun 12, 2024 at 4:36 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>> I think my point is that a paragraph like the following may be a useful
>> addition:
>> 
>> If one wishes to remove the default privilege granted to public to execute
>> all newly created procedures it is necessary to revoke that privilege for
>> every superuser in the system

> That seems... excessive.

More to the point, it's wrong.  Superusers have every privilege there
is "ex officio"; we don't even bother to look at the catalog entries
when considering a privilege check for a superuser.  Revoking their
privileges will accomplish nothing, and it does nothing about the
actual source of the problem (the default grant to PUBLIC) either.

What I'd do if I didn't like this policy is some variant of

ALTER DEFAULT PRIVILEGES IN SCHEMA public
  REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;

Repeat for each schema that you think might be publicly readable
(which is only public by default).

BTW, in PG 15 and up, the public schema is not writable by
default, which attacks basically the same problem from a different
direction.

regards, tom lane




Re: Defining columns for INSERT statements

2024-06-12 Thread Adrian Klaver

On 6/12/24 14:11, Rich Shepard wrote:
I have > 100 rows to add to a table using INSERT INTO statements. I want 
the

PK to be the next value in the sequence. Would this be the appropriate
syntax for the columns to be entered?

INSERT INTO people (person_nbr 
DEFAULT('people_person_nbr_seq'),lname,fname,job_title DEFAULT 
'Contact',company_nbr,loc_nbr,direct_phone,cell_phone,email,active 
DEFAULT('true')) VALUES

( ...


Assuming 'people_person_nbr_seq' is the sequence attached to person_nbr 
and the other DEFAULTs are the column  defaults then the syntax would be:


INSERT INTO people
(person_nbr, lname, fname, job_title, company_nbr,loc_nbr, direct_phone, 
cell_phone,email, active)

VALUES
(DEFAULT, 'klaver', 'adrian', DEFAULT, 1, 1, '555-1234', '555-4321', 
'adrian.kla...@aklaver.com', DEFAULT);


If they are not the column defaults then just supply the literal value.



TIA,

Rich




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PG16.1 security breach?

2024-06-12 Thread David G. Johnston
On Wed, Jun 12, 2024 at 3:57 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > On Wed, Jun 12, 2024 at 4:36 PM David G. Johnston <
> > david.g.johns...@gmail.com> wrote:
> >> I think my point is that a paragraph like the following may be a useful
> >> addition:
> >>
> >> If one wishes to remove the default privilege granted to public to
> execute
> >> all newly created procedures it is necessary to revoke that privilege
> for
> >> every superuser in the system
>
> > That seems... excessive.
>
> More to the point, it's wrong.  Superusers have every privilege there
> is "ex officio"; we don't even bother to look at the catalog entries
> when considering a privilege check for a superuser.  Revoking their
> privileges will accomplish nothing, and it does nothing about the
> actual source of the problem (the default grant to PUBLIC) either.
>

Apparently my forgetting the word "default" in front of privilege makes a
big difference in understanding/meaning.

Alter Default Privileges FOR postgres Revoke Execute on Functions From
PUBLIC;

That is what I meant; I was wrong in that I wrote "permission" instead of
"default privilege":

If one wishes to remove the default privilege granted to public to execute
all newly created procedures it is necessary to revoke that [default]
privilege for every superuser in the system.

The FOR postgres part is inferred; it matches the current role if omitted.

If I now create (or even if there already existed) a new superuser named
davidj and they create a function, the public pseudo-role will be able to
execute that function.  You would first need to execute the above command,
substituting davidj for postgres, if you want to prevent that.

David J.


Re: Defining columns for INSERT statements

2024-06-12 Thread Rich Shepard

On Wed, 12 Jun 2024, Adrian Klaver wrote:


Assuming 'people_person_nbr_seq' is the sequence attached to person_nbr
and the other DEFAULTs are the column defaults then the syntax would be:

INSERT INTO people
(person_nbr, lname, fname, job_title, company_nbr,loc_nbr, direct_phone, 
cell_phone,email, active)

VALUES
(DEFAULT, 'klaver', 'adrian', DEFAULT, 1, 1, '555-1234', '555-4321', 
'adrian.kla...@aklaver.com', DEFAULT);


If they are not the column defaults then just supply the literal value.


Adrian,

No matter how many postgres docs and web pages I read I didn't find an
example as explicit as yours. Columns other than the person_nbr were not
defined with defaults and I expected to enter them for each row.

Many thanks for another valuable lesson.

Best regards,

Rich




Re: Defining columns for INSERT statements

2024-06-12 Thread Adrian Klaver

On 6/12/24 16:24, Rich Shepard wrote:

On Wed, 12 Jun 2024, Adrian Klaver wrote:


Assuming 'people_person_nbr_seq' is the sequence attached to person_nbr
and the other DEFAULTs are the column defaults then the syntax would be:

INSERT INTO people
(person_nbr, lname, fname, job_title, company_nbr,loc_nbr, 
direct_phone, cell_phone,email, active)

VALUES
(DEFAULT, 'klaver', 'adrian', DEFAULT, 1, 1, '555-1234', '555-4321', 
'adrian.kla...@aklaver.com', DEFAULT);


If they are not the column defaults then just supply the literal value.


Adrian,

No matter how many postgres docs and web pages I read I didn't find an
example as explicit as yours. Columns other than the person_nbr were not
defined with defaults and I expected to enter them for each row.


The 'Examples' section at the bottom of this page:

https://www.postgresql.org/docs/current/sql-insert.html

is a useful resource.



Many thanks for another valuable lesson.

Best regards,

Rich




--
Adrian Klaver
adrian.kla...@aklaver.com





Is NVMe RAID useless (performance-wise) with PostgreSQL?

2024-06-12 Thread Dragan Milivojević

Hi all,

While building a new PostgreSQL server, I realized that performance with a
single disk is the same as, or better than, with a RAID0 4-disk array.

All benchmarks were conducted using pgbench with a scaling factor of 2000.
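(For context, a scale factor of 2000 corresponds to an initialization roughly
like the following; host, user, and database name are placeholders, not from
the original post:)

```shell
# Build the pgbench tables at scale factor 2000 (~30 GB of data);
# connection options here are placeholders.
pgbench -i -s 2000 -h localhost -U postgres bench
```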
For a typical run with pgbench -j 4 -c 512 -P 60 -r -T 300 -b tpcb-like, these 
are the results:


single disk run:

latency average = 17.524 ms
latency stddev = 6.904 ms
tps = 28870


iostat:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
  26.10    0.00   21.02    2.95    0.00   49.93

| Device  | r/s      | rMB/s  | rrqm/s | %rrqm | r_await | rareq-sz | w/s      | wMB/s  | wrqm/s | %wrqm | w_await | wareq-sz | f/s    | f_await | aqu-sz | %util |
|---------|----------|--------|--------|-------|---------|----------|----------|--------|--------|-------|---------|----------|--------|---------|--------|-------|
| nvme0n1 | 28641.27 | 255.00 | 0.00   | 0.00  | 0.16    | 9.12     | 27665.67 | 458.09 | 0.00   | 0.00  | 0.09    | 16.96    | 251.47 | 1.69    | 7.69   | 98.08 |



RAID0 4 disk, 4K chunk:

latency average = 22.269 ms
latency stddev = 10.825 ms
tps = 22742


avg-cpu:  %user   %nice %system %iowait  %steal   %idle
  23.63    0.00   19.63    1.53    0.00   55.21

| Device  | r/s      | rMB/s  | rrqm/s  | %rrqm | r_await | rareq-sz | w/s       | wMB/s  | wrqm/s | %wrqm | w_await | wareq-sz | f/s    | f_await | aqu-sz | %util |
|---------|----------|--------|---------|-------|---------|----------|-----------|--------|--------|-------|---------|----------|--------|---------|--------|-------|
| md127   | 55359.93 | 216.25 | 0.00    | 0.00  | 0.09    | 4.00     | 105629.07 | 412.61 | 0.00   | 0.00  | 0.04    | 4.00     | 0.00   | 0.00    | 9.02   | 93.76 |
| nvme1n1 | 12763.33 | 54.03  | 1067.47 | 7.72  | 0.08    | 4.33     | 26572.07  | 103.31 | 37.33  | 0.14  | 0.05    | 3.98     | 162.53 | 1.74    | 2.67   | 99.18 |
| nvme3n1 | 12753.07 | 53.97  | 1063.87 | 7.70  | 0.08    | 4.33     | 26560.47  | 103.26 | 37.40  | 0.14  | 0.05    | 3.98     | 162.47 | 1.73    | 2.58   | 99.15 |
| nvme4n1 | 12787.27 | 54.10  | 1062.80 | 7.67  | 0.09    | 4.33     | 26492.73  | 102.99 | 35.67  | 0.13  | 0.05    | 3.98     | 162.53 | 1.69    | 2.67   | 99.07 |
| nvme5n1 | 12796.53 | 54.15  | 1065.60 | 7.69  | 0.09    | 4.33     | 26505.67  | 103.04 | 35.73  | 0.13  | 0.05    | 3.98     | 162.53 | 1.66    | 2.56   | 98.95 |

BTW, if these tables are mangled in transport or by email clients, I posted 
this email to https://pastebin.com/raw/ZmsH0T5M.

A 4K chunk is obviously not optimal, but I should still be getting around a 2x 
uplift.
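For what it's worth, rebuilding the array with a larger chunk looks roughly
like this (device names taken from the iostat output above; this is
destructive and wipes the array):

```shell
# Stop the existing array and recreate it with a 256 KiB chunk (destroys data!)
mdadm --stop /dev/md127
mdadm --create /dev/md127 --level=0 --raid-devices=4 --chunk=256 \
      /dev/nvme1n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
```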

In the past, when tuning PostgreSQL, I tweaked various RAID parameters like 
chunk size, stripe_cache_size, etc.
but it never occurred to me to check the performance against a single drive. 
Hence this email.
I'm not sure if this is expected or if there is something wrong with my setup.
Full system details are at the end of the message.

While exploring this, I went deep down the rabbit hole, running hundreds of 
tests and trying dozens of configurations.
The best I achieved was 31K TPS with a plain RAID0 256KB chunk, nvme 
poll_queues=4 and io_pool=1.
This resulted in a measly 2% improvement compared to a single disk.
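(For completeness: the polling knobs mentioned above are set roughly like
this. The device path is a placeholder, and the nvme module has to be
reloaded -- or nvme.poll_queues=4 passed on the kernel command line.)

```shell
# Allocate polled I/O queues when the nvme module loads
modprobe -r nvme && modprobe nvme poll_queues=4

# Enable polling on each member device's block queue
echo 1 > /sys/block/nvme0n1/queue/io_poll
```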


A sample of results:

Single disk nvme poll_queues=0
pgbench -j4 -c X    tps    avg latency ms  latency stddev ms
1   477 2.096 0.258
4   1167    3.426 0.295
16  4408    3.623 0.545
64  12533   5.089 0.999
128 21295   5.979 1.538
256 28022   9.048 3.014
512 28870  17.524 6.904


Single disk nvme poll_queues=4
pgbench -j4 -c X    tps    avg latency ms  latency stddev ms
128 22284   5.711   1.448
256 27390   9.240   2.848
512 30596   16.452  6.090
1024    26352   38.481  19.513


4 disk RAID 0 4KB Chunk nvme poll_queues=0
pgbench -j4 -c X    tps avg latency ms  latency stddev ms
128 17614   7.231   2.43
256 22347   11.37   4.922
512 22742   22.269  10.825
1024    20896   48.57   26.975


4 disk LVM RAID 0 4KB Chunk nvme poll_queues=4
pgbench -j4 -c X    tps avg latency ms  latency stddev ms
128 17423   7.312   1.991
256 22064  11.521   4.044
512 24875  20.373   9.421
1024    21242  47.692   25.843


4 disk RAID 0 8KB Chunk nvme poll_queues=4
pgbench -j4 -c X    tps avg latency ms  latency stddev ms
128 18907   6.736   2.094
256 24909   10.184  3.446
512 24878   20.331  8.448
1024    

"permission denied to COPY to or from an external program" even with GRANT pg_execute_server_program

2024-06-12 Thread Chema
Dear Postgreezers,

been banging my head against this one for a couple days.  Googling and
StackExchange were just as useful, so you're my last hope.  I've been
unable to get a non-admin user to run Copy From Program even after granting
pg_execute_server_program, and everything else I could think of.  It always
fails with ERROR: permission denied to COPY to or from an external program.

I'll let the code speak by itself.  Here's a minimal example that I've
tried in the last official Docker image:

-- Spin up a temporary Pg and connect via psql
--   docker run --name pg16 -e POSTGRES_PASSWORD=qwer -d postgres:16
--   docker exec -ti pg16 psql -U postgres

CREATE TABLE testtable (
    id int NOT NULL GENERATED ALWAYS AS IDENTITY,
    name text NOT NULL
);

Create Role justintestin noinherit login password 'qwer';

-- Necessary privileges
GRANT CONNECT ON DATABASE postgres TO justintestin;
GRANT USAGE ON SCHEMA public TO justintestin;
GRANT ALL ON ALL TABLES IN SCHEMA public TO justintestin;

-- Apply them to new tables/views created by the admin account
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO justintestin;

-- Allow Copy From Program... or try to anyway
GRANT pg_execute_server_program TO justintestin;

-- Tests
GRANT ALL ON testtable TO justintestin;
GRANT ALL ON SCHEMA public TO justintestin;
GRANT ALL ON DATABASE postgres TO justintestin;
GRANT pg_read_all_data TO justintestin;
GRANT pg_write_all_data TO justintestin;

-- Copy works with the admin account
Copy testtable(name) From Program 'echo "Buffa Testata"' CSV;
-- COPY 1

-- But fails with justintestin
SET role justintestin;
Copy testtable(name) From Program 'echo "Errato Denegato"' CSV;
-- SQL Error [42501]: ERROR: permission denied to COPY to or from an external program
-- Detail: Only roles with privileges of the "pg_execute_server_program" role
-- may COPY to or from an external program.

-- Even though the role looks privileged
SELECT rolname FROM pg_roles
WHERE pg_has_role(current_user, oid, 'member');
--          rolname
-- ---------------------------
--  pg_read_all_data
--  pg_write_all_data
--  pg_execute_server_program
--  justintestin

-- Insert works
Insert Into testtable ("name") VALUES ('Pazzo Intestinato');
-- INSERT 0 1
Select * From testtable;
SELECT current_user, session_user;

-- Clean up for a new test
SET role postgres;
Drop Table testtable;
Drop Owned By justintestin;
Drop Role justintestin;


What am I missing? (besides a few chunks of hair)


Re: "permission denied to COPY to or from an external program" even with GRANT pg_execute_server_program

2024-06-12 Thread David G. Johnston
On Wednesday, June 12, 2024, Chema  wrote:

>
> Create Role justintestin noinherit login password 'qwer';
>
>
> GRANT pg_execute_server_program TO justintestin;
>
>
>
Pretty sure that since you chose not to let justintestin inherit privileges,
you will need to issue a “set role to pg_execute_server_program” before you
attempt the copy command.
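In SQL terms, either of these should do it (a sketch against the roles from
the example; the WITH INHERIT clause requires PostgreSQL 16):

```sql
-- Option 1: switch to the privileged role for the session
-- (the role running COPY also needs INSERT on the target table)
SET ROLE pg_execute_server_program;

-- Option 2: make this one membership inheritable despite the
-- role-level NOINHERIT
GRANT pg_execute_server_program TO justintestin WITH INHERIT TRUE;
```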

David J.


Re: "permission denied to COPY to or from an external program" even with GRANT pg_execute_server_program

2024-06-12 Thread Tom Lane
Chema  writes:
> been banging my head against this one for a couple days.  Googling and
> StackExchange were just as useful, so you're my last hope.  I've been
> unable to get a non-admin user to run Copy From Program even after granting
> pg_execute_server_program, and everything else I could think of.  It always
> fails with ERROR: permission denied to COPY to or from an external program.

Works for me:

regression=# create user joe;
CREATE ROLE
regression=# \c - joe
You are now connected to database "regression" as user "joe".
regression=> create table jt (t1 text);
CREATE TABLE
regression=> copy jt From Program 'echo "Buffa Testata"' CSV;
ERROR:  permission denied to COPY to or from an external program
DETAIL:  Only roles with privileges of the "pg_execute_server_program" role may 
COPY to or from an external program.
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works 
for anyone.
regression=> \c - postgres
You are now connected to database "regression" as user "postgres".
regression=# GRANT pg_execute_server_program TO joe;
GRANT ROLE
regression=# \c - joe
You are now connected to database "regression" as user "joe".
regression=> copy jt From Program 'echo "Buffa Testata"' CSV;
COPY 1

What PG version are you working with?

regards, tom lane




Re: "permission denied to COPY to or from an external program" even with GRANT pg_execute_server_program

2024-06-12 Thread Chema
>
>
> Pretty sure since you choose not to allow justintestin to inherit stuff
> you will need to issue a “set role to pg_execute_server_program” before you
> attempt the copy command.
>
> David J.
>

That was it!  Blind paranoia bites my rear again.  Thanks!