Re: Strange behaviors with ranges

2024-08-28 Thread Laurenz Albe
On Tue, 2024-08-27 at 19:29 +0200, Jean-Christophe Boggio wrote:
> I have 2 very confusing behaviors when using ranges.
> 
> It all started with this query:
> 
>  WITH rangespaliers AS (
>  SELECT numrange( qtep1+1   , qtep2,  '[]') AS rangep FROM paliers JOIN tmp_limitcontrats USING(idcontrat)
> --    SELECT numrange( qtep1+1   , qtep2,  '[]') AS rangep FROM paliers WHERE idcontrat=1003
>      )
>      ,rangespaliers2 AS (
>      select *
>      FROM rangespaliers
>      WHERE rangep <> NUMRANGE(NULL, NULL) -- ERROR IS HERE
>      )
>      select * from rangespaliers2;
> 
> When I run this query, I get the error "Range lower bound must be less 
> than or equal to range upper bound".
> 
> (a) If I comment out the line marked "ERROR IS HERE", I don't have an 
> error (but I'm missing the filter of course).
> 
> (b) Also, if I uncomment line 3 and comment out line 2, I get the 
> correct behavior. Very strange thing is that tmp_limitcontrats has only 
> one row which contains "idcontrat=1003".
> 
> Now, in that table "paliers", the line for idcontrat=1003 has value NULL 
> for both qtep1 and qtep2. So the final behavior should be an empty 
> result set.

The explanation is in the execution plans.

With your sample data, the plan looks like

 Hash Join
   Hash Cond: (tmp_limitcontrats.idcontrat = paliers.idcontrat)
   ->  Seq Scan on tmp_limitcontrats
   ->  Hash
         ->  Seq Scan on paliers
               Filter: (numrange(((qtep1 + 1))::numeric, (qtep2)::numeric) <> '(,)'::numrange)

If you remove the WHERE condition from the second CTE, the plan becomes

 Hash Join
   Hash Cond: (tmp_limitcontrats.idcontrat = paliers.idcontrat)
   ->  Seq Scan on tmp_limitcontrats
   ->  Hash
         ->  Seq Scan on paliers

In the first plan, the filter is evaluated while scanning "paliers",
before the join condition has removed the rows for other contracts, so
numrange() also gets called for rows whose bounds are not ordered.  In
the second case, "rangep" is never used, so PostgreSQL optimizes the
query so that it does not calculate the column at all, which avoids the
runtime error.
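
As an aside, you can make the query safe regardless of the plan by only
constructing the range when the bounds are ordered.  A minimal sketch
(untested against your schema, assuming "qtep1" and "qtep2" are integers):

 WITH rangespaliers AS (
     SELECT CASE
                WHEN qtep1 IS NULL OR qtep2 IS NULL OR qtep1 + 1 <= qtep2
                THEN numrange((qtep1 + 1)::numeric, qtep2::numeric, '[]')
            END AS rangep  -- NULL whenever the bounds would be reversed
     FROM paliers JOIN tmp_limitcontrats USING (idcontrat)
 )
 SELECT * FROM rangespaliers
 WHERE rangep <> numrange(NULL, NULL);

Rows where the CASE yields NULL are then filtered out by the WHERE clause,
since "NULL <> ..." is not true.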

Yours,
Laurenz Albe




Re: Pgbackrest specifying the default DB necessary/correct way ?

2024-08-28 Thread Greg Sabino Mullane
On Wed, Aug 28, 2024 at 1:39 AM KK CHN  wrote:

> In this DB server I have other databases than the default "edb"
> database. Specifying the above line as pg1-database=edb   // I am
> not sure whether this line is necessary or not ?
>

The pgbackrest process needs to connect to the database, which means it
needs a user and database. You need this variable if you do not have the
default database, "postgres". If you have a database named postgres, you
can leave this out. Otherwise, yes, it is necessary.
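
For example, a minimal stanza section might look like this (the stanza
name and path below are illustrative, not taken from your setup):

[demo]
pg1-path=/var/lib/edb/as16/data
pg1-port=5444
pg1-user=enterprisedb
pg1-database=edb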

> pg1-database=edb // specifying like this, will it block other databases
> on this server to get backed up ?   IF yes how can I overcome this ?
>

pgBackRest works on a cluster level, so *all* the databases are backed up.
Indeed, it is not possible to only backup some of the databases. It's the
whole cluster.

> ( I am just learning and exploring PgbackRest)  found online some
> reference configurations so using like this )


Probably best to stick to the official docs; this section in particular is
worth a read:

https://pgbackrest.org/user-guide-rhel.html

Cheers,
Greg


Re: Ghost data from failed FDW transactions?

2024-08-28 Thread Greg Sabino Mullane
On Tue, Aug 27, 2024 at 9:03 PM Jacob Biesinger 
wrote:

> I'm scratching my head at a few rows in the root DB, where it seems the
> corresponding tenant transaction rolled back, but the root DB transaction
> committed
>
...

> Before I jump into particulars, does this sound like expected behavior?
>

No, it sounds like something is going wrong. Your setup as described should
work to keep both sides in sync.

Through the magic of postgres_fdw, row triggers, and distributed
> transactions,
>

Can you expand on "distributed transactions" here?

Cheers,
Greg


Re: Pgbackrest specifying the default DB necessary/correct way ?

2024-08-28 Thread KK CHN
Very helpful.

On Wed, Aug 28, 2024 at 5:51 PM Greg Sabino Mullane 
wrote:

> On Wed, Aug 28, 2024 at 1:39 AM KK CHN  wrote:
>
>> In this DB server I have other databases than the default "edb"
>> database. Specifying the above line as pg1-database=edb   // I am
>> not sure whether this line is necessary or not ?
>>
>
> The pgbackrest process needs to connect to the database, which means it
> needs a user and database. You need this variable if you do not have the
> default database, "postgres". If you have a database named postgres, you
> can leave this out. Otherwise, yes, it is necessary.
>
>> pg1-database=edb // specifying like this, will it block other databases
>> on this server to get backed up ?   IF yes how can I overcome this ?
>>
>
> pgBackRest works on a cluster level, so *all* the databases are backed up.
> Indeed, it is not possible to only backup some of the databases. It's the
> whole cluster.
>
>> ( I am just learning and exploring PgbackRest)  found online some
>> reference configurations so using like this )
>
>
> Probably best to stick to the official docs; this section in particular is
> worth a read:
>
> https://pgbackrest.org/user-guide-rhel.html
>
> Cheers,
> Greg
>
>
>


PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread KK CHN
I am trying out a pgbackrest config on a Production Server and a Repo
server (RHEL 9.4, EPAS 16, pgbackrest 2.52.1).

I have configured pgbackrest.conf on both machines as per the official
docs.

From both machines, passwordless auth works for the db user (enterprisedb)
and the repo user (postgres).

When I create the stanza on both the DB server and the Repo server, it
fails with:

connection to server socket failed: no password supplied

Here are my configs on both the primary and the repo server.

DB Server.
[root@db1 ~]# cat /etc/pgbackrest/pgbackrest.conf
[Repo]
pg1-path=/data/edb/as16/data
pg1-port=5444
pg1-user=enterprisedb
pg-version-force=16
pg1-database=edb

[global]
repo1-host=10.255.0.40
repo1-host-user=postgres
archive-async=y
spool-path=/var/spool/pgbackrest
log-level-console=info
log-level-file=debug
delta=y

[global:archive-get]
process-max=2

[global:archive-push]
process-max=4
[root@db1 ~]#





Reposerver
[root@dbtest ~]# cat /etc/pgbackrest/pgbackrest.conf
[Repo]
pg1-host=10.15.0.202
pg1-host-user=enterprisedb
pg1-path=/data/edb/as16/data
pg-version-force=16

[global]
repo1-path=/data/DB_BKUPS
repo1-block=y
repo1-bundle=y
repo1-retention-full=2
repo1-retention-diff=2
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=acbd
process-max=5
log-level-console=info
log-level-file=debug
start-fast=y
delta=y

[global:archive-push]
compress-level=3
[root@dbtest ~]#



On DB Server, stanza creation fails with:
[root@db1 ~]# sudo -u enterprisedb pgbackrest --stanza=Repo
--log-level-console=info stanza-create
2024-08-28 19:30:31.518 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=4062179-ecf39176 --log-level-console=info --log-level-file=debug
--pg1-database=edb --pg1-path=/data/edb/as16/data --pg1-port=5444
--pg1-user=enterprisedb --pg-version-force=16 --repo1-host=10.255.0.40
--repo1-host-user=postgres --stanza=Repo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='edb' port=5444 user='enterprisedb'': connection to server on
socket "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-08-28 19:30:31.523 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@db1 ~]#








On Repo server, stanza creation fails as follows:


[root@dbtest ~]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info stanza-create
2024-08-28 19:21:10.958 P00   INFO: stanza-create command begin 2.52.1:
--exec-id=350565-6e032daa --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/data/DB_BKUPS --stanza=Repo

WARN: unable to check pg1: [DbConnectError] raised from remote-0 ssh
protocol on '10.15.0.202': unable to connect to 'dbname='edb' port=5444
user='enterprisedb'': connection to server on socket "/tmp/.s.PGSQL.5444"
failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2024-08-28 19:21:12.462 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbtest ~]#




My DB Server pg_hba.conf is as follows:


# "local" is for Unix domain socket connections only
local   all all md5
# IPv4 local connections:
hostall all 127.0.0.1/32md5
hostall all 10.0.0.0/8  md5



# IPv6 local connections:
hostall all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication all md5
hostreplication all 127.0.0.1/32md5
hostreplication all ::1/128 md5
hostreplication all 10.0.0.0/8  md5
[root@db1 ~]#




and I have .pgpass on the DB server as:

[root@db1 ~]# cat /var/lib/edb/.pgpass
*:*:replication:enterprisedb:my_secret_password
[root@db1 ~]# ls -al /var/lib/edb/.pgpass
-rw-------. 1 enterprisedb enterprisedb 38 Aug 28 19:01 /var/lib/edb/.pgpass
[root@db1 ~]#

Why does it complain about no password being supplied?


Any help is much appreciated.

Krishane


Re: PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread David G. Johnston
On Wednesday, August 28, 2024, KK CHN  wrote:

>
> and I have   .pgpass in DB server as
>

You assumed this mattered but I see no mention that pgBackRest consults
this file.

It seems to require the local entry in pg_hba.conf to use peer
authentication.
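
Something along these lines (a sketch based on the user name in your
config; adjust as needed):

local   all   enterprisedb   peer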

David J.


Re: PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread Amitabh Kant
On Wed, Aug 28, 2024 at 8:00 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, August 28, 2024, KK CHN  wrote:
>
>>
>> and I have   .pgpass in DB server as
>>
>
> You assumed this mattered but I see no mention that pgBackRest consults
> this file.
>
> It seems to require the local entry in pg_hba.conf to use peer
> authentication.
>
> David J.
>
>
Section 21.4 on this page (
https://pgbackrest.org/user-guide.html#introduction )  does seem to mention
the use of the .pgpass file. I have no idea about the actual problem though.

Amitabh


Re: PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread David G. Johnston
On Wednesday, August 28, 2024, Amitabh Kant  wrote:

> On Wed, Aug 28, 2024 at 8:00 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wednesday, August 28, 2024, KK CHN  wrote:
>>
>>>
>>> and I have   .pgpass in DB server as
>>>
>>
>> You assumed this mattered but I see no mention that pgBackRest consults
>> this file.
>>
>> It seems to require the local entry in pg_hba.conf to use peer
>> authentication.
>>
>> David J.
>>
>>
> Section 21.4 on this page (
> https://pgbackrest.org/user-guide.html#introduction ) does seem to mention
> the use of the .pgpass file.
> I have no idea about the actual problem though.
>

Yes, postgres itself uses .pgpass so when you configure streaming
replication between two servers, something that is doable regardless of
using pgBackRest, the server-to-server connection can utilize .pgpass.
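
For reference, each .pgpass line has the form
hostname:port:database:username:password, so a replication entry might look
like this (the values are placeholders):

*:5432:replication:replicator:my_secret_password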

David J.


Re: Ghost data from failed FDW transactions?

2024-08-28 Thread Jacob Biesinger
On Wed, Aug 28, 2024 at 5:39 AM Greg Sabino Mullane 
wrote:

> On Tue, Aug 27, 2024 at 9:03 PM Jacob Biesinger 
> wrote:
>
>> I'm scratching my head at a few rows in the root DB, where it seems the
>> corresponding tenant transaction rolled back, but the root DB transaction
>> committed
>>
> ...
>
>> Before I jump into particulars, does this sound like expected behavior?
>>
>
> No, it sounds like something is going wrong. Your setup as described
> should work to keep both sides in sync.
>

Well, that's a plus. At least we're (probably) not using it wrong :)

There aren't many details in the docs around failure modes... is there
anything there that could cause this issue?

For example, if there were some temporary network outage that prevented
communication between the two DBs, would the FDW fail gracefully? Or if
there were some disk failure or something after the FDW signals to the
remote (root) DB to commit but then the local (tenant) DB failed to commit?
We've had a few outages over the years where we hit the `max_connections`
setting on the cluster (which would affect both the root + tenant DBs), but
connections are held for the duration of both local + remote txns, so it
doesn't seem like that would affect this. We don't use pgBouncer, either on
the client -> DB side or as an in-between on the DB -> DB FDW side.


>
> Through the magic of postgres_fdw, row triggers, and distributed
>> transactions,
>>
>
> Can you expand on "distributed transactions" here?
>

I just mean "using the FDW as described". It is magic that we can get
proper transactional + ACID semantics in a distributed system. The FDW
really helps our use-case.

But to go deeper, we use the javascript knex adapter and some
application-level transaction management that automatically retries a
transaction N times when it encounters serialization errors. On this
particular endpoint, the emitted SQL for the full transaction looks
something like:

BEGIN;
INSERT INTO "devices" ("orgId", "patientId", "deviceId", "data")
VALUES (
'org1',
'patient1',
'device1',
'{"id": "device1", "patientId": "patient1", "serialNumber": "12345",
"status": "active" }'
);
INSERT INTO "devices" ("orgId", "patientId", "deviceId", "data")
VALUES (
'org1',
'patient1',
'device2',
'{"id": "device2", "patientId": "patient1", "serialNumber": "67890",
"status": "active" }'
);
SELECT * FROM "rootDb"."assets";

-- execute some logic client-side, nothing touching the DB

UPDATE "rootDb"."assets" WHERE ...;
COMMIT;

I guess the maybe-relevant bits here are that we do some additional
reading + writing to the remote / root DB (other tables) subsequently as
part of the same txn. The JS driving this also has the unfortunate
shortcoming that the two `INSERT` statements run "in parallel", meaning
there's a race to execute them (serially) through their shared
txn/connection. The ordering shouldn't matter, but this also means
that error handling (e.g., when there is a conflict with the `EXCLUDE
"serialNumber"` constraint) may not stop the second `INSERT` statement from
being attempted (and rejected by postgres). But I think that's all
client-side details that shouldn't affect the FDW txn semantics, right?


--
Jake Biesinger



Re: Ghost data from failed FDW transactions?

2024-08-28 Thread Rob Sargent


> On Aug 28, 2024, at 10:18 AM, Jacob Biesinger  
> wrote:
> 
> But to go deeper, we use the javascript knex adapter and some 
> application-level transaction management that automatically retries a 
> transaction N times when it encounters serialization errors. On this 
> particular endpoint, the emitted SQL for the full transaction looks something 
> like:
> 
> BEGIN;
> INSERT INTO "devices" ("orgId", "patientId", "deviceId", "data")
> VALUES (
> 'org1',
> 'patient1',
> 'device1',
> '{"id": "device1", "patientId": "patient1", "serialNumber": "12345", 
> "status": "active" }'
> );
> INSERT INTO "devices" ("orgId", "patientId", "deviceId", "data")
> VALUES (
> 'org1',
> 'patient1',
> 'device2',
> '{"id": "device2", "patientId": "patient1", "serialNumber": "67890", 
> "status": "active" }'
> );
> SELECT * FROM "rootDb"."assets";
> 
> -- execute some logic client-side, nothing touching the DB
> 
> UPDATE "rootDb"."assets" WHERE ...;
> COMMIT;
> 
Any value in supplying a single insert statement a la (less back and forth 
perhaps?):
BEGIN;
INSERT INTO "devices" ("orgId", "patientId", "deviceId", "data")
VALUES (
'org1',
'patient1',
'device1',
'{"id": "device1", "patientId": "patient1", "serialNumber": "12345", "status":
"active" }'),
(
'org1',
'patient1',
'device2',
'{"id": "device2", "patientId": "patient1", "serialNumber": "67890", "status":
"active" }'
);



Re: Ghost data from failed FDW transactions?

2024-08-28 Thread Jacob Biesinger
>
> Any value in supplying a single insert statement a la (less back and forth
> perhaps?):
>

Yes, absolutely that would be better. This particular endpoint has some
ancient + crufty code backing it (migrated from a NoSQL DB with a
db-agnostic shim that we're slowly replacing). The old code likes doing
things client-side instead of being sane about SQL semantics.

But I don't think that would affect the issue here, right?

--
Jake Biesinger


PgbackRest stanza creation : on DB server or both DB and Repo server ?

2024-08-28 Thread KK CHN
List,

I am configuring pgbackrest (RHEL 9.4, EPAS 16, pgbackrest 2.52.1) on two
servers (a database cluster server and a repo server).

Query 1: Do I need to create the stanza on both servers (on the DB server,
EPAS 16, user enterprisedb, as well as on the repo server, user postgres)?

Is creating the stanza only on the DB server, but not on the repo server,
sufficient? Or do I have to create the stanza on both servers as follows?

On the DB server (RHEL 9.4, EPAS 16, DB user enterprisedb):
]# sudo -u enterprisedb pgbackrest --stanza=Repo --log-level-console=info
stanza-create

Initially it complained about another pgbackrest running and aborted the
stanza creation. On the second attempt, without complaining about anything,
the stanza creation was successful :) I couldn't understand why it
complained about another pgbackrest running in the first place.
(Before running stanza creation on the DB server, I had first tried the
stanza creation on the repo server as ]# sudo -u postgres pgbackrest
--stanza=Repo --log-level-console=info stanza-create )


On the repo server (user postgres; no PostgreSQL is installed there, the
postgres user exists only for backup purposes):

]# sudo -u postgres pgbackrest --stanza=Repo --log-level-console=info
stanza-create

But no success (on multiple stanza creation attempts): it always aborted
with a message that another pgbackrest is running.

So stanza creation has to be performed on either the DB server or the repo
server, but not on both?

Kindly shed some light on this.

Thank you,
Krishane


Re: PgbackRest : Stanza creation fails on DB Server and Repo Server

2024-08-28 Thread KK CHN
Thank you all for the great help.

I couldn't get a chance to restart the DB cluster after making the
highlighted changes (production server, downtime requested), so correct me
if I am wrong. I edited the pg_hba.conf on the DB server as follows:
local   all             all                             trust
# IPv4 local connections:
host    all             all             127.0.0.1/32    md5
host    all             all             10.0.0.0/8      md5
# IPv6 local connections:
host    all             all             ::1/128         md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                             peer
#local  replication     all                             md5
host    replication     all             127.0.0.1/32    md5
host    replication     all             ::1/128         md5
host    replication     all             10.0.0.0/8      md5
[root@db1 edb]#




But a workaround seems to have worked, as follows:

[root@db1 edb]# cat .pgpass
*:*:replication:enterprisedb:my_password

changed to

[root@db1 edb]# cat .pgpass
*:*:*:enterprisedb:password


For those struggling with this issue, this may help: in the DB server's
enterprisedb user's home directory (here it is /var/lib/edb/):

[root@db1 edb]# cat .pgpass
*:*:*:enterprisedb:password
[root@db1 edb]# ls -al .pgpass
-rw-------. 1 enterprisedb enterprisedb 28 Aug 29 09:26 .pgpass
[root@db1 edb]#




On Wed, Aug 28, 2024 at 8:28 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, August 28, 2024, Amitabh Kant  wrote:
>
>> On Wed, Aug 28, 2024 at 8:00 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Wednesday, August 28, 2024, KK CHN  wrote:
>>>

 and I have   .pgpass in DB server as

>>>
>>> You assumed this mattered but I see no mention that pgBackRest consults
>>> this file.
>>>
>>> It seems to require the local entry in pg_hba.conf to use peer
>>> authentication.
>>>
>>> David J.
>>>
>>>
>> Section 21.4 on this page (
>> https://pgbackrest.org/user-guide.html#introduction )  does seem to
>> mention the use of .pgpass file. I have no idea about the actual problem
>> though.
>>
>
> Yes, postgres itself uses .pgpass so when you configure streaming
> replication between two servers, something that is doable regardless of
> using pgBackRest, the server-to-server connection can utilize .pgpass.
>
> David J.
>


PgBackRest Full backup and N/W reliability

2024-08-28 Thread KK CHN
List,

I am doing a full backup using PgBackRest from a production server to a
repo server.

My connection is an IPsec VPN over ILL (8 Mbps link) between the production
DB server and the remote repo server.

I understand the bottleneck of 8 Mbps between the servers (the server NICs
and switch are 10 Gbps).

Query: I have started the backup command and it is running (it may go on
for hours or days, as the link speed is minimal).
If the link disconnects or a network error happens before the backup
command completes, the obvious option is to reissue the backup command.

If so, does the backup process start again from scratch, or does it resume
from where it stopped?

If it starts from scratch, I am afraid I can never complete the initial
full backup :(

Or is there a workaround if the network connectivity is lost in between?


Any suggestions are much appreciated.

Thank you ,
Krishane

[root@dbtest pgbackrest]# sudo -u postgres pgbackrest --stanza=Repo
--log-level-console=info backup

2024-08-29 10:55:27.729 P00   INFO: backup command begin 2.52.1: --delta
--exec-id=523103-56943986 --log-level-console=info --log-level-file=debug
--pg1-host=10.15.0.202 --pg1-host-user=enterprisedb
--pg1-path=/data/edb/as16/data --pg-version-force=16 --process-max=5
--repo1-block --repo1-bundle --repo1-cipher-pass=
--repo1-cipher-type=aes-256-cbc --repo1-path=/data/DB_BKUPS
--repo1-retention-diff=2 --repo1-retention-full=2 --stanza=Repo --start-fast
WARN: no prior backup exists, incr backup has been changed to full
2024-08-29 10:55:30.589 P00   INFO: execute non-exclusive backup start:
backup begins after the requested immediate checkpoint completes
2024-08-29 10:55:31.543 P00   INFO: backup start archive =
00010085004C, lsn = 85/4C0007F8
2024-08-29 10:55:31.543 P00   INFO: check archive for prior segment
00010085004B


ON Repo Server:
[root@dbtest backup]# date
Thursday 29 August 2024 10:58:08 AM IST
[root@dbtest backup]# du -h
165M    ./Repo
165M    .

[root@dbtest backup]# date
Thursday 29 August 2024 11:37:03 AM IST
[root@dbtest backup]# du -h
1.9G    ./Repo
1.9G    .

On the Production Server, the /data/edb/as16/data directory size is 500 GB.


Re: Significant Execution Time Difference Between PG13.14 and PG16.4 for Query on information_schema Tables.

2024-08-28 Thread David Rowley
On Wed, 28 Aug 2024 at 18:59, Justin Clift  wrote:
> Any idea who normally does those, and if it would be reasonable to add
> test(s) for the internal information tables?

These tend to get added along with features and along with bug fixes.  I
imagine any tests for the information_schema views would be for the
results of the views rather than for the expected plans.  However, that
seems very separate from this, as the bug has nothing to do with
information_schema. It just happens to be a query on an information_schema
view that helped highlight the bug.

Those views are often quite complex, and so are the resulting plans.  With
tests checking the expected EXPLAIN output, it's much better to give these
a very narrow focus, otherwise the expected output could be too unstable
and the purpose of the test harder to determine for anyone working on a
new patch that results in a plan change of a preexisting test.  I've seen
tests rendered useless by people blindly accepting a plan change without
properly determining what the test is supposed to be testing. That's much
more likely to happen when the purpose of the test is less clear due to
some unwieldy and complex expected plan.

I managed to get a reproducer for this down to something quite simple.
Probably that, or something similar, would be a better test to make sure
this bug stays gone.

David