Re: pg_dump and public schema

2020-03-04 Thread Paul Foerster
Hi,

I think "create database" always creates the "public" schema. So, all
is well. All you have to do is drop it after running the dump.sql
script.
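
A minimal sketch of that cleanup step (assuming nothing in the restored database still needs the public schema):

```sql
-- Run while connected to the database restored from dump.sql.
DROP SCHEMA public;            -- fails if any objects still live in it
-- DROP SCHEMA public CASCADE; -- only if you are sure nothing in it is needed
```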

Cheers,
Paul

On Wed, Mar 4, 2020 at 2:43 PM Олег Самойлов  wrote:
>
> Hi all. PostgreSQL 12.1. Strange behaviour with pg_dump and an absent public 
> schema.
>
> I dropped the public schema and work under the "username" schema.
>
> => \dn
> List of schemas
>  Name  | Owner
> ---+---
>  olleg | olleg
> (1 row)
>
> Dump now
>
> pg_dump -U postgres -C olleg >dump.sql
>
> --
> -- PostgreSQL database dump
> --
>
> -- Dumped from database version 12.1
> -- Dumped by pg_dump version 12.1
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> --
> -- Name: olleg; Type: DATABASE; Schema: -; Owner: olleg
> --
>
> CREATE DATABASE olleg WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE 
> = 'ru_RU.UTF-8' LC_CTYPE = 'ru_RU.UTF-8';
>
>
> ALTER DATABASE olleg OWNER TO olleg;
>
> \connect olleg
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> --
> -- Name: olleg; Type: SCHEMA; Schema: -; Owner: olleg
> --
>
> CREATE SCHEMA olleg;
>
>
> ALTER SCHEMA olleg OWNER TO olleg;
>
> --
> -- PostgreSQL database dump complete
> --
>
> recreate DB from the dump:
>
> psql postgres postgres -f dump.sql
>
> And now I see the public schema, which should be absent.
>
> psql olleg olleg
>
> => \dn
>   List of schemas
>   Name  |  Owner
> +--
>  olleg  | olleg
>  public | postgres
> (2 rows)
>




Re: select * from test where name like 'co_%'

2020-03-10 Thread Paul Foerster
Hi,

an underscore matches a single character, any character. You'd have to
escape it and tell the query what the escape character is if you want
it to be treated as a literal character:

db=# create table t(t text);
CREATE TABLE
db=# insert into t(t) values ('fox'), ('fo_'), ('fo_x');
INSERT 0 3
db=# select * from t;
  t
--
 fox
 fo_
 fo_x
(3 rows)

db=# select * from t where t like 'fo_%';
  t
--
 fox
 fo_
 fo_x
(3 rows)

db=# select * from t where t like 'fo\_%' escape '\';
  t
--
 fo_
 fo_x
(2 rows)

Cheers,
Paul

On Tue, Mar 10, 2020 at 1:49 PM sivapostg...@yahoo.com
 wrote:
>
> Hello,
>
> What is returned when I run a query like this:
>
> Select * from test where name like 'co_%';
>
> I expect anything that starts with 'co_' and NOT just 'co'.  Am I right?  But 
> I get every name that starts with 'co'. Why?
>
> Happiness Always
> BKR Sivaprakash
>




Re: Automatic failover

2020-03-17 Thread Paul Foerster
Hi Sonam,

On Tue, Mar 17, 2020 at 11:30 AM Sonam Sharma  wrote:
>
> I have set up replication using repmgr. I wanted to know how much time the slave 
> node will take to become the new primary with automatic failover. If there is 
> any document on auto failover, can someone please share it?
> Thanks..
> Thanks..

I don't know about repmgr. We use Patroni with etcd. A switchover
takes only a second or two with that. And the (new) replica is usually
in sync again within about 5 seconds.

Cheers,
Paul




Re: glibc update 2.31 to 2.38

2024-09-21 Thread Paul Foerster
Hi Peter,

> On 21 Sep 2024, at 00:33, Peter J. Holzer  wrote:
> 
> I don't use SLES but I would expect it to have an RPM for it.
> 
> If you have any test machine which you can upgrade before the production
> servers (and given the amount of data and availability requirements you
> have, I really hope you do) you should be set.

One of our admins did me a favor and upgraded my build server ahead of 
schedule. So I can both test our current PostgreSQL version as well as rebuild 
it if necessary.

I can't test all of our data; that'd take quite a few months or more. I can just 
try to identify some crucial databases and columns. When those tests are done, I 
can only pray and hope for the best.

I already expressed the idea of changing all locales to ICU. The problem there 
is that I'd have to create new instances and then move each database 
individually. I wish I could convert already running databases… This also takes 
time. Still, I think I'm going to try this route. It's always a gamble whether 
reindexing is needed with any glibc change.
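
For reference, pre-creating a database with the ICU provider (PostgreSQL 15+ syntax; the database name and locales below are placeholders) looks roughly like:

```sql
CREATE DATABASE mydb
  TEMPLATE template0
  ENCODING 'UTF8'
  LOCALE_PROVIDER icu
  ICU_LOCALE 'en-US'
  LOCALE 'en_US.UTF-8';  -- libc LC_COLLATE/LC_CTYPE are still recorded alongside ICU
```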

Cheers,
Paul



Re: glibc update 2.31 to 2.38

2024-09-25 Thread Paul Foerster
Hi Adrian,

> On 22 Sep 2024, at 18:53, Adrian Klaver  wrote:
> 
> https://www.postgresql.org/docs/17/release-17.html#RELEASE-17-HIGHLIGHTS
> 
> Add a builtin platform-independent collation provider (Jeff Davis)
> 
> This supports C and C.UTF-8 collations.

I must admit that I haven't read the readme fully yet, but this is definitely 
great news. Thanks very much.

Cheers,
Paul



Re: glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi Adrian,

> On 19 Sep 2024, at 17:00, Adrian Klaver  wrote:
> 
> I would take a look at:
> 
> https://wiki.postgresql.org/wiki/Locale_data_changes
> 
> It refers to the glibc 2.28 change in particular, but includes some generic 
> tips that could prove useful.
> 
> 
> The glibc change log below might also be useful:
> 
> https://sourceware.org/glibc/wiki/Release

I've seen those before but since the article only refers to 2.28 and SUSE 15.3, 
and I couldn't find anything in the glibc release notes, I thought I'd ask.

Cheers,
Paul





Re: glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi Tom,

> On 19 Sep 2024, at 17:14, Tom Lane  wrote:
> 
> No, I wouldn't expect that to be necessary.

I was hoping one of the pros would say that. 🤣

> Maybe.  We don't really track glibc changes, so I can't say for sure,
> but it might be advisable to reindex indexes on string columns.

Advisable is a word I unfortunately can't do much with. We have terabytes and 
terabytes of data in hundreds of databases, each having potentially hundreds of 
columns that are candidates. Just reindexing and taking down applications 
during that time is not an option in a 24x7 high availability environment.

Cheers,
Paul





glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi,

we have SLES 15.5 which has glibc 2.31. Our admin told us that he's about to 
install the SLES 15.6 update which contains glibc 2.38.

I have built our PostgreSQL software from source on SLES 15.5, because we have 
some special requirements which the packages cannot fulfill. So I have 
questions:

1) Do I have to build it again on 15.6?

2) Does the glibc update have any impact? I recall having to have everything 
reindexed when the 2.28 update came due to major locale changes, but I haven't 
had to do it since then.

3) Where and how can I find out if it is necessary to reindex? And how can I 
find out what indexes would be affected?
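
One way to approximate question 3 is a catalog query; a rough sketch (not from the thread) that lists indexes containing at least one collation-sensitive column. Note that it may over-report, since e.g. columns with the C collation are also flagged:

```sql
SELECT DISTINCT i.indexrelid::regclass AS index_name,
                i.indrelid::regclass  AS table_name
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indexrelid
WHERE a.attcollation <> 0;   -- 0 means the column is not collatable
```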

I'd really appreciate your comments. Thanks very much in advance.

Paul



Re: glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi Joe,

> On 19 Sep 2024, at 19:07, Joe Conway  wrote:
> 
> Every glibc major version change potentially impacts the sorting of some 
> strings, which would require reindexing. Whether your actual data trips into 
> any of these changes is another matter.
> 
> You could check by doing something equivalent to this on every collatable 
> column with an index built on it, in every table:
> 
> 8<---
> WITH t(s) AS (SELECT <column> FROM <table> ORDER BY 1)
> SELECT md5(string_agg(t.s, NULL)) FROM t;
> 8<---
> 
> Check the before and after glibc upgrade result -- if it is the same, you are 
> good to go. If not, rebuild the index before *any* DML is done to the table.

I like the neatness of this one. I'm thinking about how to implement it on 
hundreds of databases with hundreds of columns. That'll be a challenge, but 
at least it's a start.
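
One way to scale that check across many columns is to generate the checksum queries from the catalogs; a hypothetical sketch (run its output with \gexec in psql before and after the upgrade, then diff the results):

```sql
-- Emit one md5-checksum query per collatable column of every user table.
SELECT format(
  'WITH t(s) AS (SELECT %I::text FROM %I.%I ORDER BY 1) '
  'SELECT %L AS col, md5(string_agg(t.s, NULL)) FROM t;',
  a.attname, n.nspname, c.relname,
  n.nspname || '.' || c.relname || '.' || a.attname) AS check_sql
FROM pg_attribute a
JOIN pg_class c     ON c.oid = a.attrelid AND c.relkind = 'r'
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE a.attcollation <> 0
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```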

Thanks very much for this one.

Cheers,
Paul



Re: glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi Peter,

> On 19 Sep 2024, at 19:43, Peter J. Holzer  wrote:
> 
> I wrote a small script[1] which prints all unicode code points and a few
> selected[2] longer strings in order. If you run that before and after
> the upgrade and the output doesn't change, you are probably fine.
> (It checks only the default collation, though: If you have indexes using
> a different collation you would have to modify the script accordingly.)
> 
> If there are differences, closer inspection might show that the changes
> don't affect you. But I would reindex all indexes on text (etc.) columns
> just to be sure.
> 
>hp
> 
> [1] https://git.hjp.at:3000/hjp/pgcollate
> [2] The selection is highly subjective and totally unscientific.
>Additions are welcome.

I'm not a Python specialist, but I take it that the script needs psycopg2, which 
we probably don't have. So I'd have to build some sort of venv around it like 
I had to do to get Patroni working on our systems.

Well, we'll see.

Thanks for this script.

Cheers,
Paul





Re: glibc update 2.31 to 2.38

2024-09-19 Thread Paul Foerster
Hi Joe,

> On 19 Sep 2024, at 20:09, Joe Conway  wrote:
> 
> See my thread-adjacent email, but suffice to say that if there are collation 
> differences that do affect your tables/data, and you allow any inserts or 
> updates, you may wind up with corrupted data (e.g. duplicate data in your 
> otherwise unique indexes/primary keys).

Yes, I know that.

> For more examples about that see 
> https://joeconway.com/presentations/glibc-SCaLE21x-2024.pdf

A very interesting PDF. Thanks very much.

> A potential alternative for you (discussed at the end of that presentation) 
> would be to create a new branch based on your original SLES 15.5 glibc RPM 
> equivalent to this:
> 
> https://github.com/awslabs/compat-collation-for-glibc/tree/2.17-326.el7
> 
> There is likely a non-trivial amount of work involved (the port from the AL2 
> rpm to the RHEL7 rpm took me the better part of a couple of days), but once 
> done your collation is frozen to the specific version you had on 15.5.

I'm not a developer. I have one machine which is equivalent to all other 
servers, except that it has gcc, make and some other things for me to build 
PostgreSQL. I can't make the admins run an RPM on all servers. I can obviously 
put a library into the /path/2/postgres/software/lib64 directory but not into 
the system.

Also, my build server does not have internet access. So things like git clone 
would be an additional show stopper. Unfortunately, I'm pretty limited.

Cheers,
Paul





Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-28 Thread Paul Foerster
Hi Tom, hi Alvaro,

> On 27 Nov 2024, at 19:52, Tom Lane  wrote:
> 
> Okay, so I was able to reproduce this from scratch on HEAD:

great, thanks.

> I doubt that there's anything actually wrong with the catalog state at
> this point (perhaps Alvaro would confirm that).  That leads to the
> conclusion that what's wrong is the release notes' query for fingering
> broken constraints, and it needs some additional test to avoid
> complaining about (I suspect) self-reference cases.

In the meantime, I updated the whole company. The one test database was actually 
the only database where this was returned. I found no other occurrences.

As I understand it, the worst thing that could happen is that one or more rows 
end up in a detached partition which should actually be in another partition, 
right? Since there were no rows, no harm can have been done. Also, since this 
is a self-reference, the wrong table is also the right one.

Again, thanks very much for clarifying this.

Cheers
Paul



Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-29 Thread Paul Foerster
Hi Alvaro,

> On 29 Nov 2024, at 18:15, Alvaro Herrera  wrote:
> 
> This all was to say that the query in the release notes is undoubtedly
> wrong.  After thinking some more about it, I think the fix is to add 1
> to the number of constraints:
> 
> SELECT conrelid::pg_catalog.regclass AS "constrained table",
>   conname AS constraint,
>   confrelid::pg_catalog.regclass AS "references",
>   pg_catalog.format('ALTER TABLE %s DROP CONSTRAINT %I;',
> conrelid::pg_catalog.regclass, conname) AS "drop",
>   pg_catalog.format('ALTER TABLE %s ADD CONSTRAINT %I %s;',
> conrelid::pg_catalog.regclass, conname,
> pg_catalog.pg_get_constraintdef(oid)) AS "add"
> FROM pg_catalog.pg_constraint c
> WHERE contype = 'f' AND conparentid = 0 AND
>   (SELECT count(*) FROM pg_catalog.pg_constraint c2
>WHERE c2.conparentid = c.oid) <>
>   ((SELECT count(*) FROM pg_catalog.pg_inherits i
>WHERE (i.inhparent = c.conrelid OR i.inhparent = c.confrelid) AND
>  EXISTS (SELECT 1 FROM pg_catalog.pg_partitioned_table
>  WHERE partrelid = i.inhparent)) +
>CASE when pg_partition_root(conrelid) = confrelid THEN 1 ELSE 0 END);
> 
> This reports case 2 as OK and case 1 as bogus, as should be.  I tried
> adding more partitions and this seems to hold correctly.  I was afraid
> though that this would fail if we create an FK in an intermediate level
> of the partition hierarchy ... but experimentation doesn't seem to give
> that result.  I've run out of time today to continue to look though.

Thanks very much for this really detailed analysis and sharing your insights. 
I'll give the new query a try on Monday when I'm back at work. Do I also need 
to recheck all other databases with this new query which didn't report anything 
with the original query?

> Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
> "La vida es para el que se aventura"

You're located in the middle of the forest east of Freiburg im Breisgau in 
Germany? 🤣

Cheers,
Paul



Re: PostgreSQL 15.9 Update: Partitioned tables with foreign key constraints

2024-11-19 Thread Paul Foerster
Hi Adrian,

> On 19 Nov 2024, at 17:17, Adrian Klaver  wrote:
> 
> Read this:
> 
> https://www.postgresql.org/about/news/out-of-cycle-release-scheduled-for-november-21-2024-2958/
> 
> and hold off awhile.

Thanks very much. I will.

Cheers,
Paul



Re: PostgreSQL 15.9 Update: Partitioned tables with foreign key constraints

2024-11-19 Thread Paul Foerster
Hi Alvaro,

> On 19 Nov 2024, at 17:34, Alvaro Herrera  wrote:
> 
> It doesn't really matter when you do it, because the constraint only
> gets broken by running DETACH with the old server code.  You have
> already run the DETACH sometime in the past (that's how the constraint
> got broken), which means you would not run it again now to the same
> table.  The old server code will behave correctly when doing ADD / DROP
> constraint, as will the new server code.  Feel free to run it when it's
> more convenient to you.

Thanks.

> I'd advise against running ALTER TABLE DETACH until you have upgraded,
> however -- at least, for partitioned tables that have foreign keys
> pointing to other partitioned tables.

I have no influence on that. It's a third-party application. In fact, I can't 
even do much about the applications developed in-house because they're too big. 
That means if anything is built into an application, it takes many moons to get 
it out again.

Cheers,
Paul






PostgreSQL 15.9 Update: Partitioned tables with foreign key constraints

2024-11-19 Thread Paul Foerster
Hi,

the PostgreSQL 15.9 release notes instruct to look out especially for detached 
partitions with foreign key constraints. I'm in the process of updating our 
databases from 15.8 to 15.9 now and found a case where the select statement 
returns a constraint.

The release notes say nothing about when to fix that using the generated add or 
drop statements.

Do I want/need to do that before or after I exchange the software? And if it is 
to be done on a particular one of the two releases, why is that?

https://www.postgresql.org/docs/15/release-15-9.html
Section E.1.2, changelog entry 5.

Any insight would be highly appreciated. Thanks in advance.

Cheers
Paul



Re: PostgreSQL 15.9 Update: Partitioned tables with foreign key constraints

2024-11-19 Thread Paul Foerster
Hi Tom,

> On 19 Nov 2024, at 17:25, Tom Lane  wrote:
> 
> Generally speaking, our release notes are addressed to someone who's
> already installed the update (or a later one).

Thank you for the advice.

Cheers,
Paul




Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-26 Thread Paul Foerster
Hi Adrian,

> On 26 Nov 2024, at 17:56, Adrian Klaver  wrote:
> 
> Did you commit the statements?

Yes. I have autocommit on, the psql default.

> Are you using concurrent sessions to do this?

No. I do this in one session. 1. select, 2. drop, 3. add, 4. select.

> When you run the query again do you get the same two statements?

Yes. I can repeat the above 4 steps as much as I want. The result remains the 
same. I would have expected to have an empty result doing the final repeated 
select, but it shows exactly the same output.

Cheers,
Paul





Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-26 Thread Paul Foerster
Hi Tom,

> On 26 Nov 2024, at 22:25, Tom Lane  wrote:
> 
> I would have expected an empty result too.  Can you confirm that
> p_ci_pipelines used to be a partition of something?  Can you show us
> the full DDL (or psql \d+ output) for the partitioned table it
> used to be part of, and for that matter also for p_ci_pipelines?
> Did the FK used to reference the whole partitioned table, or just
> this partition?
> 
> I'm suspicious that our repair recipe might not have accounted
> for self-reference FKs fully, but that's just a gut feeling at
> this point.

Of course, it contains no secret data. Please find the full log below. 
According to the add constraint statement, it is a self reference.

Thanks for looking into it.

Cheers,
Paul


gitxp1t=# \set
AUTOCOMMIT = 'on'
...
VERSION = 'PostgreSQL 15.10 on x86_64-pc-linux-gnu, compiled by gcc (SUSE 
Linux) 7.5.0, 64-bit'
...



gitxp1t=# SELECT conrelid::pg_catalog.regclass AS "constrained table",
gitxp1t-#conname AS constraint,
gitxp1t-#confrelid::pg_catalog.regclass AS "references",
gitxp1t-#pg_catalog.format('ALTER TABLE %s DROP CONSTRAINT %I;',
gitxp1t(#  conrelid::pg_catalog.regclass, conname) AS 
"drop",
gitxp1t-#pg_catalog.format('ALTER TABLE %s ADD CONSTRAINT %I %s;',
gitxp1t(#  conrelid::pg_catalog.regclass, conname,
gitxp1t(#  pg_catalog.pg_get_constraintdef(oid)) AS 
"add"
gitxp1t-# FROM pg_catalog.pg_constraint c
gitxp1t-# WHERE contype = 'f' AND conparentid = 0 AND
gitxp1t-#(SELECT count(*) FROM pg_catalog.pg_constraint c2
gitxp1t(# WHERE c2.conparentid = c.oid) <>
gitxp1t-#(SELECT count(*) FROM pg_catalog.pg_inherits i
gitxp1t(# WHERE (i.inhparent = c.conrelid OR i.inhparent = c.confrelid) AND
gitxp1t(#   EXISTS (SELECT 1 FROM pg_catalog.pg_partitioned_table
gitxp1t(#   WHERE partrelid = i.inhparent));
constrained table |   constraint|   references   |  
  drop |
 add
  
---+-++-+--
p_ci_pipelines| fk_262d4c2d19_p | p_ci_pipelines | ALTER TABLE 
p_ci_pipelines DROP CONSTRAINT fk_262d4c2d19_p; | ALTER TABLE p_ci_pipelines 
ADD CONSTRAINT fk_262d4c2d19_p FOREIGN KEY (auto_canceled_by_partition_id, 
auto_canceled_by_id) REFERENCES p_ci_pipelines(partition_id, id) ON UPDATE 
CASCADE ON DELETE SET NULL;
(1 row)
 gitxp1t=# ALTER TABLE p_ci_pipelines DROP CONSTRAINT fk_262d4c2d19_p;
ALTER TABLE
gitxp1t=# ALTER TABLE p_ci_pipelines ADD CONSTRAINT fk_262d4c2d19_p FOREIGN KEY 
(auto_canceled_by_partition_id, auto_canceled_by_id) REFERENCES 
p_ci_pipelines(partition_id, id) ON UPDATE CASCADE ON DELETE SET NULL;
ALTER TABLE
gitxp1t=# SELECT conrelid::pg_catalog.regclass AS "constrained table",
gitxp1t-#conname AS constraint,
gitxp1t-#confrelid::pg_catalog.regclass AS "references",
gitxp1t-#pg_catalog.format('ALTER TABLE %s DROP CONSTRAINT %I;',
gitxp1t(#  conrelid::pg_catalog.regclass, conname) AS 
"drop",
gitxp1t-#pg_catalog.format('ALTER TABLE %s ADD CONSTRAINT %I %s;',
gitxp1t(#  conrelid::pg_catalog.regclass, conname,
gitxp1t(#  pg_catalog.pg_get_constraintdef(oid)) AS 
"add"
gitxp1t-# FROM pg_catalog.pg_constraint c
gitxp1t-# WHERE contype = 'f' AND conparentid = 0 AND
gitxp1t-#(SELECT count(*) FROM pg_catalog.pg_constraint c2
gitxp1t(# WHERE c2.conparentid = c.oid) <>
gitxp1t-#(SELECT count(*) FROM pg_catalog.pg_inherits i
gitxp1t(# WHERE (i.inhparent = c.conrelid OR i.inhparent = c.confrelid) AND
gitxp1t(#   EXISTS (SELECT 1 FROM pg_catalog.pg_partitioned_table
gitxp1t(#   WHERE partrelid = i.inhparent));
constrained table |   constraint|   references   |  
  drop |
 add
  
---+-++-+--
 p_ci_pipelines| fk_262d4c2d19_p | p_ci_pipelines | ALTER TABLE 
p_ci_pipelines DROP CONSTRAINT fk_262d4c2d19_p; | ALTER TABLE p_ci_pipelines 
ADD CONSTRAINT fk_262d4c2d19_p FOREIGN KEY (auto_canceled_by_partition_id, 
auto

Re: PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-30 Thread Paul Foerster
Hi Alvaro,

> On 30 Nov 2024, at 08:41, Alvaro Herrera  wrote:
> 
> Only if you have self-referencing FKs in partitioned tables.  It
> would be an interesting data point to verify whether this reports
> anything else.  Also, I'd be really curious if your databases include
> the case I'm suspicious about: a multi-level hierarchy containing an FK
> that points to an intermediate level of itself.

The instance I reported was the only one in our several hundred databases. So I 
guess this is really a corner case. As mentioned I'll try on Monday.

> I'm within fives minutes of longitude and latitude of that location, yes
> :-) I didn't want to give unnecessary precision there, but is somebody
> wants to chat sometime or whatever is welcome to ping me.

Then you're not really far away. I'm located in the Solothurn, Switzerland area, 
which is less than 90 km away. Drop me a line if you ever make it to 
Solothurn. 🤣

Cheers,
Paul





Re: DB Switchover using repmgr--Error

2024-11-26 Thread Paul Foerster
Hi,

> [postgres@post1 bin]$ ./repmgr -f /var/lib/pgsql/repmgr.conf primary register
> ERROR: following errors were found in the configuration file:
>   syntax error in file "/var/lib/pgsql/repmgr.conf" line 3, near token 
> "data_directory"
>   syntax error in file "/var/lib/pgsql/repmgr.conf" line 6, near token 
> "log_file"
> [postgres@post1 bin]$
> 
> Conf file:
> 
> [postgres@post1 data]$ cat /var/lib/pgsql/repmgr.conf
> node_id=1
> node_name=primary
> conninfo='host=192.168.29.193 user=repmgr dbname=repmgr connect_timeout=2' 
> data_directory='/application/pgsql/data' failover=automatic
> promote_command='/usr/pgsql-16/bin/repmgr standby promote -f 
> /var/lib/pgsql/repmgr.conf --log-to-file'
> follow_command='/usr/pgsql-16/bin/repmgr standby follow -f 
> /var/lib/pgsql/repmgr.conf --log-to-file --upstream-node-id=%n'
> pg_bindir='/usr/pgsql-16/bin' log_file='/usr/pgsql-16/repmgr.log'
> [postgres@post1 data]$

I'm not a repmgr guru, but at first glance I would say that your config lacks 
some line breaks. It should probably look like this:

node_id=1
node_name=primary
conninfo='host=192.168.29.193 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/application/pgsql/data'
failover=automatic
promote_command='/usr/pgsql-16/bin/repmgr standby promote -f 
/var/lib/pgsql/repmgr.conf --log-to-file'
follow_command='/usr/pgsql-16/bin/repmgr standby follow -f 
/var/lib/pgsql/repmgr.conf --log-to-file --upstream-node-id=%n'
pg_bindir='/usr/pgsql-16/bin'
log_file='/usr/pgsql-16/repmgr.log'

Cheers
Paul



PostgreSQL 15.10 update corrective action for ATTACH PARTITION/DETACH PARTITION

2024-11-26 Thread Paul Foerster
Hi,

I have a question regarding the recent security update for PostgreSQL 15.

We have a gitlab database. It used to run on the PostgreSQL 15.8 software. I 
updated from 15.8 to 15.10 and executed the corrective actions as outlined in:

https://www.postgresql.org/about/news/postgresql-171-165-159-1414-1317-and-1221-released-2955/

I executed "SELECT conrelid::pg_catalog.regclass AS "constrained table", 
conname AS constraint, confrelid::pg_catalog.regclass AS "references", 
pg_catalog.format('ALTER TABLE %s DROP CONSTRAINT %I;', 
conrelid::pg_catalog.regclass, conname) AS "drop", pg_catalog.format('ALTER 
TABLE %s ADD CONSTRAINT %I %s;', conrelid::pg_catalog.regclass, conname, 
pg_catalog.pg_get_constraintdef(oid)) AS "add" FROM pg_catalog.pg_constraint c 
WHERE contype = 'f' AND conparentid = 0 AND (SELECT count(*) FROM 
pg_catalog.pg_constraint c2 WHERE c2.conparentid = c.oid) <> (SELECT count(*) 
FROM pg_catalog.pg_inherits i WHERE (i.inhparent = c.conrelid OR i.inhparent = 
c.confrelid) AND EXISTS (SELECT 1 FROM pg_catalog.pg_partitioned_table WHERE 
partrelid = i.inhparent));" which gave the result below:


-[ RECORD 1 
]-+-
constrained table | p_ci_pipelines
constraint| fk_262d4c2d19_p
references| p_ci_pipelines
drop  | alter table p_ci_pipelines drop constraint fk_262d4c2d19_p;
add   | alter table p_ci_pipelines add constraint fk_262d4c2d19_p 
FOREIGN KEY (auto_canceled_by_partition_id, auto_canceled_by_id) REFERENCES 
p_ci_pipelines(partition_id, id) ON UPDATE CASCADE ON DELETE SET NULL;

I then executed the two alter table statements without any problem. No error 
was reported and all seems ok.

Now, if I execute the query to find the constraints again, I would expect the 
result to be empty. But it is not.

Why is that and what am I supposed to do? Is the problem fixed now or is it 
still pending? Any ideas would be greatly appreciated.

Cheers
Paul



Re: libc to libicu via pg_dump/pg_restore?

2025-02-06 Thread Paul Foerster
Hi Guillaume,

> On 6 Feb 2025, at 11:13, Guillaume Lelarge  
> wrote:
> 
> You probably don't need --disable-triggers. You should fix errors in the 
> order they appear. The first one is on the drop of the database:
> 
> ERROR:  cannot drop the currently open database
> 
> pg_restore can't drop the database because it's connected to the database. 
> When you use -c and -C options, you can't connect to the database you want to 
> restore to. You have to connect to another database, such as postgres, so 
> that it can do the drop and the create. After both are done, it will connect 
> to the just-created database to do the restore step.
> 
> Look at the pg_restore man page 
> (https://www.postgresql.org/docs/current/app-pgrestore.html). It says on the 
> --create option:
> 
> When this option is used, the database named with -d is used only to issue 
> the initial DROP DATABASE and CREATE DATABASE commands. All data is restored 
> into the database name that appears in the archive.

This is intended because the dump contains a create database statement which 
creates the database with libc which is exactly what I do NOT want. I want it 
to be a libicu database. So I pre-create it as such and inhibit recreation by 
pg_restore by sitting on it with a session. So the first message about the 
database not being created is expected and can be ignored. This works fine for 
all databases so far.

My problem is the constraint violation which prevents the foreign key 
constraints from being created.

Everything works for all databases. Only this one has that problem. And since I 
disabled triggers during restore, that shouldn't be a problem either.

Btw., the parent table contains the rows in question. So they are imported. I 
just can't make out why there is a problem.

Cheers,
Paul



libc to libicu via pg_dump/pg_restore?

2025-02-06 Thread Paul Foerster
Hi,

I have a problem which I don't understand. I have and do:


instance a, libc based, PostgreSQL 15.10:

mydb=# \l mydb
  List of databases
 Name |  Owner  | Encoding |   Collate   |Ctype| ICU Locale | Locale 
Provider | Access privileges
--+-+--+-+-++-+---
 mydb | my_user | UTF8 | en_US.UTF-8 | en_US.UTF-8 || libc  
  |

$ pg_dump -Fc -Z1 -b mydb -f mydb.dump.gz
$ ls -l mydb.dump.gz
-rw--- 1 postgres postgres 14660308577 Feb  6 08:45 mydb.dump.gz


instance b, libicu based, PostgreSQL 17.2:
$ psql postgres

# create database mydb;
# \l mydb
  List of databases
 Name |   Owner  | Encoding | Locale Provider |   Collate   |Ctype| 
Locale | ICU Rules | Access privileges
--+--+--+-+-+-++---+---
 mydb | postgres | UTF8 | icu | en_US.UTF-8 | en_US.UTF-8 | 
en-US  |   |

$ pg_restore -cC --if-exists --disable-triggers -d mydb mydb.dump.gz
pg_restore: error: could not execute query: ERROR:  cannot drop the currently 
open database
Command was: DROP DATABASE IF EXISTS mydb;
pg_restore: error: could not execute query: ERROR:  database "mydb" already 
exists
Command was: CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LOCALE_PROVIDER = libc LOCALE = 'en_US.UTF-8';


pg_restore: error: could not execute query: ERROR:  insert or update on table 
"table_1" violates foreign key constraint "..._fk"
DETAIL:  Key (dokument_id)=(133680) is not present in table "...".
Command was: ALTER TABLE ONLY myschema.table
   ADD CONSTRAINT table_fk FOREIGN KEY (dokument_id) REFERENCES 
myschema.dokument(id);


pg_restore: error: could not execute query: ERROR:  insert or update on table 
"table_2" violates foreign key constraint "..._fk"
DETAIL:  Key (dokument_id)=(133740) is not present in table "dokument".
Command was: ALTER TABLE ONLY vostra2_str.nen_dokument
   ADD CONSTRAINT table_fk FOREIGN KEY (dokument_id) REFERENCES 
myschema.dokument(id);


I'm sorry, I sort of had to anonymize object names. But you should be able to 
get the gist of it. It's a dreaded message when importing. My goal is to export 
libc PostgreSQL 15 databases and import them into PostgreSQL 17 as libicu based 
databases to get away from glibc based sorting. I searched the net and found 
the "--disable-triggers" option for pg_restore, but the errors 
still occur.

What am I doing wrong or how can I better achieve that? Any help would be 
appreciated.

Thanks in advance.

Paul



Re: libc to libicu via pg_dump/pg_restore?

2025-02-08 Thread Paul Foerster
Hi Daniel,

> On 7 Feb 2025, at 14:29, Daniel Verite  wrote:
> 
> Still, you may check it with pg_amcheck [1] or try rebuilding it
> just in case.

Thanks. I guess this is good advice. I will try that on Monday.
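
For reference, the amcheck route looks roughly like this (the index name below is a placeholder; bt_index_check with heapallindexed = true also verifies that every heap tuple is found in the index):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('myschema.dokument_pkey'::regclass, true);  -- heapallindexed
```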

Cheers,
Paul




Re: libc to libicu via pg_dump/pg_restore?

2025-02-06 Thread Paul Foerster
Hi Guillaume,

> On 6 Feb 2025, at 15:51, Guillaume Lelarge  
> wrote:
> 
> You're right. Now I see the "create database" query in your previous email. I 
> should have been more careful, sorry for the noise.

No problem.

> Well, the doc says that --disable-triggers is only relevant for data-only 
> restore, which is not your use case. So you don't need it and it won't help 
> you.

Yes, I found that out too. But it doesn't hurt. 🤣

> Me neither. But another comment. You create the database, so there should be 
> no objects in it. Why do you use the -c, -C, and --if-exists options? Try 
> without them. On a new database, you should only need:
> 
> pg_restore -d mydb mydb.dump.gz

I need -C because I need the ACLs to be recreated too. I tried with -C only, 
i.e. no -c, but that doesn't work for some reason. The --if-exists is a remnant 
of my past attempts to suppress some messages. I'll try removing it, as I 
rewrote my create database script which runs before importing.

Cheers,
Paul



Re: libc to libicu via pg_dump/pg_restore?

2025-02-06 Thread Paul Foerster
Hi Adrian,

> On 6 Feb 2025, at 17:31, Adrian Klaver  wrote:
> 
> 1) Log into postgres database and do:
> 
> a) DROP DATABASE mydb;
> b) CREATE DATABASE mydb ;
> 
> 2) pg_restore -d mydb mydb.dump.gz

With the create database being "template template0", this is what my script
does. But I need the -cC options for pg_restore to get the ACLs back. Leaving
out either one of them does not get me the ACLs back.
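For reference, the restore invocation described here would look something like the sketch below. The dump file name is illustrative, and connecting to the postgres maintenance database first is an assumption:

```shell
# Sketch: -c drops, -C re-creates the target database (which, per the
# experience in this thread, brings the database-level ACLs back);
# --if-exists suppresses the error when the database is absent.
# pg_restore first connects to the given maintenance DB, then the
# CREATE DATABASE from the dump takes over.
pg_restore -c -C --if-exists -d postgres mydb.dump.gz
```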

>> pg_restore: error: could not execute query: ERROR:  insert or update on 
>> table "table_1" violates foreign key constraint "..._fk"
>> DETAIL:  Key (dokument_id)=(133680) is not present in table "...".
> 
> Is dokument_id an integer field?

Yes, it's a bigint.

> In a follow post you said:
> 
> "Everything works for all databases. Only this one has that problem."
> 
> Do you mean you made the same libc --> icu change on the other databases with 
> no errors?

Yes, I have that PostgreSQL 15.10 cluster with locale provider libc with about 
40 databases. I initdb'ed a new PostgreSQL 17.2 cluster with icu as locale 
provider and did a "create database ... template template0" for all about 40 
databases. Then I did the mentioned pg_restore for each of them as a parallel 
background job in the shell.

The whole database cluster is about 1.2 TB in size so I have to find ways to 
restore as many databases in parallel as possible. However pg_restore only 
fails on this single database. All others in that database cluster work fine.

Cheers
Paul





Re: libc to libicu via pg_dump/pg_restore?

2025-02-06 Thread Paul Foerster
Hi Adrian,

> On 6 Feb 2025, at 19:44, Adrian Klaver  wrote:
> 
> By ACL do you mean roles?
> 
> If so roles are global to the cluster not the database, so I am not seeing -C 
> being relevant.
> 
> If not you will need to be more specific about what you are referring to.

I did a "pg_dumpall -r >roles.sql" on the originale database cluster and "psql 
-f roles.sql" on the new database cluster. So, roles are pre-created as is 
necessary.

No, I mean ACLs, like in "Access privileges" when doing a "\l".

Cheers,
Paul



Re: libc to libicu via pg_dump/pg_restore?

2025-02-13 Thread Paul Foerster
Hi Adrian,

sorry for the late answer. I'm just too busy.

> On 7 Feb 2025, at 17:19, Adrian Klaver  wrote:
> 
>> With create database  being "template template0", this is what my 
>> script does. But I need the -cC options for pg_restore to get ACLs back. 
>> Leaving out either one of them will not get me the ACLs back.
> 
> That does not make sense.
> 
> Are there ACLs(privileges) in the database at all?
> 
> What is the pg_dump command you are running?

I use this pg_dump command:

pg_dump -Fc -Z1 --quote-all-identifiers -b "${PGDATABASE}" \
    -f ${dumpBase}/${clusterName}.${PGDATABASE}.dump.gz \
    >${PGDATABASE}.out 2>${PGDATABASE}.err &

The command is embedded in a for loop in a Bash script that iterates PGDATABASE
over all database names in the cluster and launches pg_dump as a background
job. It then waits for all jobs to complete (via the "wait" command). dumpBase
is just the destination directory.
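A minimal sketch of such a loop, under the assumption that connection environment variables are already set and that the database list comes from pg_database (the defaults for dumpBase and clusterName are made up):

```shell
#!/usr/bin/env bash
# Sketch of the parallel per-database dump loop described above.
set -euo pipefail

dumpBase=${dumpBase:-/backup/dumps}       # hypothetical destination
clusterName=${clusterName:-mycluster}     # hypothetical cluster name

# List all non-template databases in the cluster.
databases=$(psql -At -c "select datname from pg_database where not datistemplate")

for PGDATABASE in $databases; do
    pg_dump -Fc -Z1 --quote-all-identifiers -b "$PGDATABASE" \
        -f "${dumpBase}/${clusterName}.${PGDATABASE}.dump.gz" \
        >"${PGDATABASE}.out" 2>"${PGDATABASE}.err" &
done

wait   # block until every background pg_dump has finished
```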

If I don't use -cC, i.e. both, the access privileges are not restored. Checking
with \l just shows an empty field, as usual for a newly created database. This
happens at least with 17.2; I haven't checked 17.3 yet.

I agree; from how I understand the docs, I should be able to use only -C and
not -c.

As for the data inconsistency, PostgreSQL is right. I found out that some
clever person did an "alter table … disable trigger all" on a table and then
manipulated data. That broke referential integrity. So, this case is closed.
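A query like the following can locate such offending rows before re-enabling the triggers. The parent table and its key column are hypothetical here, since the actual name was elided in the error message:

```sql
-- Hypothetical names: child table "table_1" references "dokumente"(id)
-- via dokument_id. Rows returned here are the ones violating the FK.
select c.*
from table_1 as c
left join dokumente as d on d.id = c.dokument_id
where c.dokument_id is not null
  and d.id is null;
```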

Cheers,
Paul



Re: ICU Collations and Collation Updates

2025-04-14 Thread Paul Foerster
Hi Tom, hi Laurenz

> On 14 Apr 2025, at 16:36, Tom Lane  wrote:
> 
> Laurenz Albe  writes:
>> You would have to build PostgreSQL yourself with a fixed version of ICU
>> that you never upgrade if you want to avoid the problem.
[...]
> 2. It's at least *possible* to use your own fixed-version ICU
> library if you're desperate enough.  I don't think that would work
> too well for libc; you're stuck with what the platform provides.

That topic is interesting because I have a huge problem finding a downtime
window for our applications to reindex after SLES upgrades. I am in the process
of slowly changing everything to ICU. But limiting downtime is essential for
me.

We always build the PostgreSQL software from source, so if there's a way to
bake libicu directly into the software so that it never changes again (apart
from recompiling, of course), even when building new PostgreSQL versions, I'd
very much appreciate it if you could let me know how to do that.

The necessity for reindex is a huge problem for us.

Cheers,
Paul



Re: ICU Collations and Collation Updates

2025-04-15 Thread Paul Foerster
Hi Laurenz,

> On 14 Apr 2025, at 19:36, Laurenz Albe  wrote:
> 
> You cannot "bake in into" PostgreSQL, but you can grab the ICU source,
> install it in /usr/local or similar and build PostgreSQL against that.
> You will have to fiddle with CFLAGS and LDFLAGS so that the build process
> uses the C headers and libraries from your build of the ICU library.

I think I'm going to look into this. I'm not a developer, so I'll probably
fail. Building PostgreSQL from source is easy and well documented; I have
scripted this. But anything beyond that will probably defeat me. 🤣 Still worth
a try, though.
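For what it's worth, the build Laurenz describes might look roughly like the sketch below. This is an assumption-laden outline (the install prefix and ICU version are made up), not a tested recipe:

```shell
# Build a private ICU once and keep it pinned (hypothetical prefix/version):
#   ./runConfigureICU Linux --prefix=/usr/local/icu-74 && make && make install
ICU_PREFIX=/usr/local/icu-74

# Point PostgreSQL's configure at that ICU instead of the system one.
# The rpath bakes the library location into the binaries, so later system
# ICU upgrades do not affect the PostgreSQL build.
./configure --with-icu \
    PKG_CONFIG_PATH="${ICU_PREFIX}/lib/pkgconfig" \
    LDFLAGS="-L${ICU_PREFIX}/lib -Wl,-rpath,${ICU_PREFIX}/lib"
make && make install
```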

Thanks,
Paul



Moving from Linux to Linux?

2025-03-11 Thread Paul Foerster
Hi,

we are considering changing the PostgreSQL platform from SUSE SLE to Red Hat.
To keep service interruptions as short as possible, the idea is to set up
streaming replication from the SUSE server being replaced to a temporary Red
Hat server and then replace that SUSE server with the newly set up Red Hat
server.

My idea is to set up streaming replication for this. But this of course only
works if the data files are binary compatible.

So, I wonder, if this is possible. We have a lot of databases, some of them 
need to be highly available and some are large too.

Are there any obstacles that definitely make that a no-go? Do I risk 
corruption? It's both Linux, just a different distribution.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-11 Thread Paul Foerster
Hi Devrim, Thomas, Adrian, Ron, Joe,

answering to myself as answering to five postings in one go is impossible. 🤣

> Are there any obstacles that definitely make that a no-go? Do I risk 
> corruption? It's both Linux, just a different distribution.

The question was a bit exploratory, so the glibc versions are not known yet,
but I'm highly confident that they will differ. A reindex could in theory be
possible in most cases, but it is a definite show stopper on some of our
databases, because it would mean too much application downtime.

So, it's either logical replication or close to impossible.

Thanks very much for your input.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-12 Thread Paul Foerster
Hi Ron,

> On 11 Mar 2025, at 20:34, Ron Johnson  wrote:
> 
> If you don't do much DDL, the LR should be quite workable.

DDL during logical replication unfortunately is a show-stopper.

Cheers,
Paul




Re: Moving from Linux to Linux?

2025-03-12 Thread Paul Foerster
Hi Greg,

> On 12 Mar 2025, at 21:31, Greg Sabino Mullane  wrote:
> 
> Keep in mind that you only need to reindex text-based indexes. Ints are still 
> gonna int. So it might not be too bad.

Yes, I know, but unfortunately the worst case index of them all will still take 
a few hours.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-13 Thread Paul Foerster
Hi Ron,

> On 12 Mar 2025, at 17:59, Ron Johnson  wrote:
> 
> Developers making DDL changes on production databases?

Of course not. But I can't block developer databases. That'd make a few hundred 
developers happy.

> Or are there prod and dev databases on the same instance?  If so, then know 
> that you don't have to logically replicate the whole instance.

Also of course not. There is development, pre-production and production.

Outages on development databases make a few hundred developers happy, while 
outages of production databases are appreciated by up to almost 40K users, 
depending on the application.

Anyway, this is our concern. In our environment, logical replication is 
impossible for development databases, hard for pre-production because of 
automatic deployments and only possible on production databases.

Anyway, this is going off-topic now.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-12 Thread Paul Foerster
Hi Christophe,

> On 12 Mar 2025, at 12:16, Christophe Pettus  wrote:
> 
> You *can* apply DDL while logical replication is going on, as long as you do 
> so in a disciplined way.  This generally means applying it to the subscriber 
> before you apply it to the publisher, and making sure that any columns added 
> to a table are either NULLable or have defaults.

Yes, I know, but this is a non issue in real life with dozens of databases per 
instance and a few hundred developers doing their work.

So, logical replication online for me is more of an academic case study. In 
real life I need downtime.
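For completeness, the disciplined order Christophe describes would look like this on a hypothetical table:

```sql
-- 1) On the SUBSCRIBER first: the new column, nullable or with a default.
alter table app.orders add column note text;

-- 2) Only then on the PUBLISHER:
alter table app.orders add column note text;

-- Replicated rows that lack "note" still apply cleanly on the subscriber.
```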

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-12 Thread Paul Foerster
Hi Adrian,

> On 12 Mar 2025, at 21:26, Adrian Klaver  wrote:
> 
> A good summary of the glibc issue:
> 
> https://wiki.postgresql.org/wiki/Locale_data_changes
> 
> With distro information:
> 
> https://wiki.postgresql.org/wiki/Locale_data_changes#What_Linux_distributions_are_affected

I know the article, thanks. We needed a full reindex when upgrading from SLES 
15.2 to 15.3. We're on 15.5 now, partly already on 15.6. Thank god, we didn't 
have that ugly database back then that we have now. But I see the next doomsday 
coming on the horizon. 🤣

Also, we try to get rid of the libc locale provider and switch to libicu, 
wherever possible.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-12 Thread Paul Foerster
Hi Ron,

> On 12 Mar 2025, at 21:50, Ron Johnson  wrote:
> 
> No, I think it's 100% on point: logically replicate the Prod databases, while 
> pg_dump/pg_restore of the dev and pre-prod databases happen on weekends.

Yes, I live for and in the company. 🤣 SNCR.

No, seriously, I'm one of only two PostgreSQL DBAs, and our working capacity is
limited by biological and other factors. Next to working a normal day, the
capacity to do frequent tasks like these is limited.

Cheers,
Paul



Re: Moving from Linux to Linux?

2025-03-13 Thread Paul Foerster
Hi Adrian,

> On 12 Mar 2025, at 22:37, Adrian Klaver  wrote:
> 
> What version of RH are you migrating to?

That'd be currently Red Hat 9 with glibc 2.34.

Cheers,
Paul




Re: Moving from Linux to Linux?

2025-03-13 Thread Paul Foerster
Hi Joe,

> On 13 Mar 2025, at 03:55, Joe Conway  wrote:
> 
> If you are desperate you could start with 
> https://github.com/awslabs/compat-collation-for-glibc and create a new branch 
> for your current version of SLES/glibc and deploy the resulting rpm to RHEL. 
> At least in theory. FWIW I was able to get the glibc locale compatibility 
> library for AL2 (glibc 2.26) to work fine on Linux Mint system (glibc 2.34).

I'm not desperate. Moving from SUSE to Red Hat is an option mainly to make
building PostGIS easier to handle. We were advised this way by a consultant,
who also told us that he had never seen anyone actually running PostGIS on
SUSE. We need to build PostGIS from source because we have to meet some special
requirements. SUSE is usually hopelessly out of date as far as dependency
packages are concerned, so we have to build those from source too (cgal,
SFCGAL, gdal, proj, … you name it). The idea is that Red Hat makes these things
easier to handle because its repositories are more current than SUSE's.

> For more on the compatibility library you could watch a presentation[1] 
> starting here:
>  https://youtu.be/0E6O-V8Jato?t=1749

I'm going to watch this one later. Thanks very much.

> The other option, which may be equally untenable, is to upgrade in-place to 
> pg17 and convert everything to use the new built-in collation provider. That 
> ought to be portable across different versions of Linux.

We have now on PostgreSQL 15.12:

postgres=# select version();
                                       version
------------------------------------------------------------------------------------
 PostgreSQL 15.12 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit
(1 row)

postgres=# \l postgres
                                               List of databases
   Name   |  Owner   | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider |   Access privileges
----------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
 postgres | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | postgres=CTc/postgres
(1 row)

And we are going to PostgreSQL 17.4:

postgres=# select version();
                                       version
------------------------------------------------------------------------------------
 PostgreSQL 17.4 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 7.5.0, 64-bit
(1 row)

postgres=# \l postgres
                                               List of databases
   Name   |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU Rules | Access privileges
----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-------------------
 postgres | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US  |           |
(1 row)

Is C.UTF8 really the same as en_US.UTF8? I ask because although we use
en_US.UTF8, we are located in Switzerland, and using non-English characters is
not exactly the exception. We have characters from all over the world in our
databases. There must be no sorting differences between en_US.UTF8 and C.UTF8;
otherwise we will run into trouble with unhappy customers. So, C.UTF8 would
only be an option if the collation were identical.
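They are not identical, and a one-line probe makes the difference visible. The values are illustrative; the en_US collation name assumes the libc collation exists in the cluster, as it does in the setup shown above:

```sql
-- en_US compares roughly in dictionary order (case is a tie-breaker),
-- while C compares raw code points, where all uppercase letters precede
-- all lowercase ones.
select 'a' < 'B' collate "en_US.utf8" as en_us_says,
       'a' < 'B' collate "C"          as c_says;
-- en_us_says is true, c_says is false: the sort orders differ.
```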

> The problem you might find with libicu is that different versions of ICU can 
> have the same issues as different versions of glibc, and you might not have 
> the same ICU version available on SLES and RHEL.

Yes, I know. As far as I have been told, libicu is far less prone to major
collation changes than glibc. Also, libicu offers the possibility of pinning a
version for a certain time. Our sysadmins naturally cannot pin a glibc version
without wrecking an inevitable server upgrade.

> If you want to explore the compatibility library approach contact me off list 
> and I will try to get you started. It has been a couple of years since I 
> touched it, but when I did it took me a couple of days to get from the AL2 
> (glibc 2.26) branch (which was done first) to the RHEL 7 (glibc 2.17) branch.

I just took a quick glance. I don't have a GitHub account (and also don't want
one 🤣). I can do a git clone, but that's basically all I know. Also, right now
I'm just exploring possibilities. As far as I understand the readme on GitHub,
this would replace the glibc on Red Hat with one with adapted collation rules?
If that is the case, then our admins will definitely say no to it.

> [1] https://www.joeconway.com/presentations/2025-PGConf.IN-glibc.pdf

This is a really good one. Thanks very much for this.

Cheers,
Paul





Re: libc to libicu via pg_dump/pg_restore?

2025-02-13 Thread Paul Foerster
Hi Adrian,

> On 13 Feb 2025, at 17:40, Adrian Klaver  wrote:
> 
> Per:
> 
> https://www.postgresql.org/docs/current/ddl-priv.html
> 
> "If the “Access privileges” column is empty for a given object, it means the 
> object has default privileges (that is, its privileges entry in the relevant 
> system catalog is null). Default privileges always include all privileges for 
> the owner, and can include some privileges for PUBLIC depending on the object 
> type, as explained above. The first GRANT or REVOKE on an object will 
> instantiate the default privileges (producing, for example, 
> miriam=arwdDxt/miriam) and then modify them per the specified request. 
> Similarly, entries are shown in “Column privileges” only for columns with 
> nondefault privileges. (Note: for this purpose, “default privileges” always 
> means the built-in default privileges for the object's type. An object whose 
> privileges have been affected by an ALTER DEFAULT PRIVILEGES command will 
> always be shown with an explicit privilege entry that includes the effects of 
> the ALTER.)"
> 
> From this:
> 
> 1) It not unusual for the field to be blank.
> 
> 2) \l only lists the privileges for the database object itself, not any of 
> it's contained objects.
> 
> In the original database are you executing explicit GRANTs on the database 
> object?
> 
> Do:
> 
> pg_restore -s -f db_name.sql ${PGDATABASE}.out
> 
> This will create a text version restore of the schema objects in the dump 
> file. Then search the file for GRANT statements.

${PGDATABASE}.out and ${PGDATABASE}.err are just log files of the backgrounded
pg_dump command. The .out file is empty anyway, and the .err file only contains
the messages about the database not being able to be dropped (-c) because I'm
connected to it, which is explainable and correct.

What I mean is: in our environment there are four (application) roles with
certain privileges, one of them being the database owner, while the others have
certain rights like reading or manipulating data, but no DDL. All four roles
have their privileges shown by \l in the access privileges column. Contrary to
how I understand the pg_restore documentation, they are restored only if I use
-cC, and not if I use only -C.

Cheers,
Paul



Re: libc to libicu via pg_dump/pg_restore?

2025-02-13 Thread Paul Foerster
Hi Adrian,

> On 13 Feb 2025, at 19:05, Adrian Klaver  wrote:
> 
> Then run pg_restore -s -f db_name.sql against whatever is the dump file 
> produced by pg_dump -Fc -Z1 ...
> 
> It will create a plain text version of the schema definitions, no data in the 
> file db_name.sql. Then you can see if GRANTs are being done.

I think we're not talking about the same thing. I'm talking about access
privileges on the database, i.e. connect, create, etc. Without the connect
privilege, no schema privileges are relevant in the first place.

> This only shows the information the actual database object not the objects 
> contained within it.

Yes, this is what I am referring to, the access privileges on the database, not 
objects.

> You will need to show your work:
> 1) What does \l show in the cluster you are dumping from?
> 2) What are the roles and what privileges are they being granted?

I'm not at work anymore and won't be until Monday (long weekend 🤣), so I don't
have the exact case handy. However, I tried on my home database clusters (15.10
and 17.3). It seems that, at least here at home, using only -C works. I don't
know (yet) why it does not work at work.

Here's what I tried on my own clusters. Note the access privileges for "paul".



Source DB PostgreSQL 15.10
--

postgres=# \l mydb
                                             List of databases
 Name |  Owner   | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider |   Access privileges
------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
 mydb | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =Tc/postgres         +
      |          |          |             |             |            |                 | postgres=CTc/postgres+
      |          |          |             |             |            |                 | paul=CTc/postgres
(1 row)

$ export PGDATABASE=mydb
$ pg_dump -Fc -Z1 --quote-all-identifiers -b "${PGDATABASE}" -f ${PGDATABASE}.dump.gz

No output, no error messages. Everything is fine.



Target DB PostgreSQL 17.3
-

postgres=# create role paul login;
CREATE ROLE
postgres=# create database mydb template template0;
CREATE DATABASE
postgres=# \l mydb
                                           List of databases
 Name |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU Rules | Access privileges
------+----------+----------+-----------------+-------------+-------------+--------+-----------+-------------------
 mydb | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US  |           |
(1 row)

$ pg_restore -C -d mydb mydb.dump.gz
pg_restore: error: could not execute query: ERROR:  database "mydb" already exists
Command was: CREATE DATABASE "mydb" WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE_PROVIDER = libc LOCALE = 'en_US.UTF-8';

pg_restore: warning: errors ignored on restore: 1

postgres=# \l mydb
                                             List of databases
 Name |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU Rules |   Access privileges
------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------
 mydb | postgres | UTF8     | icu             | en_US.UTF-8 | en_US.UTF-8 | en-US  |           | =Tc/postgres         +
      |          |          |                 |             |             |        |           | postgres=CTc/postgres+
      |          |          |                 |             |             |        |           | paul=CTc/postgres
(1 row)

So, "paul" again has CTc after pg_restore. That's what does not work at work. 
I'll have to figure out what's wrong there.

Cheers,
Paul



Updating to PostgreSQL 17.5

2025-05-11 Thread Paul Foerster
Hi,

the release notes for PostgreSQL 17.5 
(https://www.postgresql.org/docs/17/release-17-5.html) state:

"Also, if you have any BRIN bloom indexes, it may be advisable to reindex them 
after updating."

I don't know what exactly that means. So I read about BRIN and bloom indexes
and learned how to create them with a "USING brin ..." or "USING bloom ..."
clause. But there is no such thing as "USING brin bloom" or any other
variation. After quite some research, I only found that there may be a BRIN and
a bloom index on the same table, created by two individual CREATE INDEX
commands. As I understand it, those are the ones referred to in the release
notes.

Also, the \di+ command wasn't much help in this case. So I came up with the
following query, which also generates the necessary reindex commands. Please
let me know whether my query below hits the wanted indexes.

It'd be really cool if release notes contained the necessary queries to find
the objects in question.

Thanks in advance,
Paul


with
brin_indexes as (
    select
        e.nspname as schema_name,
        c.relname as table_name,
        b.relname as index_name
    from
        pg_catalog.pg_index as a
        join pg_catalog.pg_class as b on b.oid = a.indexrelid
        join pg_catalog.pg_class as c on c.oid = a.indrelid
        join pg_catalog.pg_am as d on b.relam = d.oid
        join pg_catalog.pg_namespace as e on e.oid = c.relnamespace
    where
        d.amname = 'brin'
),
bloom_indexes as (
    select
        e.nspname as schema_name,
        c.relname as table_name,
        b.relname as index_name
    from
        pg_catalog.pg_index as a
        join pg_catalog.pg_class as b on b.oid = a.indexrelid
        join pg_catalog.pg_class as c on c.oid = a.indrelid
        join pg_catalog.pg_am as d on b.relam = d.oid
        join pg_catalog.pg_namespace as e on e.oid = c.relnamespace
    where
        d.amname = 'bloom'
)
select
    concat('reindex index "', schema_name, '"."', index_name, '"; -- brin index') as reindex_cmd
from
    brin_indexes
where
    schema_name in (select schema_name from bloom_indexes) and
    table_name in (select table_name from bloom_indexes)
union all
select
    concat('reindex index "', schema_name, '"."', index_name, '"; -- bloom index') as reindex_cmd
from
    bloom_indexes
where
    schema_name in (select schema_name from brin_indexes) and
    table_name in (select table_name from brin_indexes)
order by
    reindex_cmd;



Re: Updating to PostgreSQL 17.5

2025-05-11 Thread Paul Foerster
Hi Tom,

> On 11 May 2025, at 21:36, Tom Lane  wrote:
> 
> Sorry --- it means BRIN indexes that use one of the "xxx_bloom_ops" opclasses.
> 
> https://www.postgresql.org/docs/current/brin.html#BRIN-BUILTIN-OPCLASSES

Ah, so that'd reduce my monster query to:

select
    concat('reindex index "', schemaname, '"."', indexname, '";') as reindex_cmd
from
    pg_indexes
where
    indexdef ~ '_bloom_ops';
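For reference, an index of the kind this query matches is an ordinary BRIN index declared with one of the *_bloom_ops opclasses, not a separate "bloom" access method. Table and column are hypothetical:

```sql
-- What the release note calls a "BRIN bloom index":
create index on measurements using brin (device_id int4_bloom_ops);
```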

Thanks very much.

Paul