LibPQ: PQresultMemorySize as proxy to transferred bytes

2023-06-28 Thread Dominique Devienne
Hi,

To measure throughput, I'm timing the exec time spent in libpq
against the size of the result set in bytes, as reported
by PQresultMemorySize().

EXEC:  7x (  130,867 rows,  54,921,532 bytes) in 0.305s (171.8 MB/s)
EXEC:  8x (  180,079 rows,  95,876,047 bytes) in 0.493s (185.5 MB/s)
EXEC:  9x (  224,253 rows, 371,663,836 bytes) in 2.986s (118.7 MB/s)

The problem is that I have only a 1GbE network link, so theoretically the
max throughput should be around 125 MB/s, which the first two runs exceed.

These 3 runs access the same schema, doing full scans of a few "real data"
tables, and the last one accesses more/larger bytea columns. These are plain
SELECTs in binary mode using normal sync execution (no cursors, COPY,
single-row mode, PIPELINE, etc.).

Obviously from these results, I now realize PQresultMemorySize() returns
something larger than what went across the network. Can someone explain
how so? And is there a better, cross-platform way to programmatically know
the network traffic exchanged on the connection's socket? Obviously libpq
itself knows, but I don't see any way to access that info.

Perhaps tracing might help? But would that incur overhead?
I'd appreciate any insight. Thanks, --DD
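
[For reference: libpq's built-in tracing (PQtrace, plus PQsetTraceFlags on
libpq 14 and later) dumps every protocol message to a file, and on 14+ each
traced line includes the message length, which can be summed. It does write
each message to disk, so some overhead is to be expected. A minimal sketch,
with the connection string and output path as placeholders:]

    /* Minimal sketch (placeholder conninfo and file name): trace the
     * protocol traffic of one binary-mode query to a log file. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=postgres");      /* placeholder */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        FILE *trace = fopen("libpq_trace.log", "w");
        if (trace)
        {
            PQtrace(conn, trace);
            PQsetTraceFlags(conn, PQTRACE_SUPPRESS_TIMESTAMPS); /* v14+, optional */
        }

        /* binary-mode query, as in the original test */
        PGresult *res = PQexecParams(conn, "SELECT 1",
                                     0, NULL, NULL, NULL, NULL, 1);
        PQclear(res);

        PQuntrace(conn);            /* stop tracing */
        if (trace)
            fclose(trace);          /* sum the logged message lengths afterwards */
        PQfinish(conn);
        return 0;
    }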


Replication between different 15.x minor versions ok?

2023-06-28 Thread David Tinker
Is it ok to use physical replication between different 15.x minor releases
(on Ubuntu 22.04)? I haven't been able to find a definitive answer. Thanks.


Re: Replication between different 15.x minor versions ok?

2023-06-28 Thread Inzamam Shafiq
Streaming replication between minor versions works perfectly fine. Ideally
keep both servers on the same minor version, but different minor versions
also work.







Re: Replication between different 15.x minor versions ok?

2023-06-28 Thread Laurenz Albe
On Wed, 2023-06-28 at 12:01 +0200, David Tinker wrote:
> Is it ok to use physical replication between different 15.x minor releases 
> (on Ubuntu 22.04)? I haven't been able to find a definitive answer. 

Yes, that is OK.

Yours,
Laurenz Albe




Re: LibPQ: PQresultMemorySize as proxy to transferred bytes

2023-06-28 Thread Tom Lane
Dominique Devienne  writes:
> Obviously from these results, I now realize PQresultMemorySize() returns
> something larger than what went across the network. Can someone explain
> how so?

That number is the total space actually requested from malloc() for
the PGresult object.  But we request space in blocks (typically 2KB
each), so there's some overhead due to fields not exactly filling
a block, unused space in the last block, etc.  If you're testing
with very small result sets (say a couple hundred KB) then the
value could be pretty far off what went across the network.
A larger result set should show less percentage error.

regards, tom lane
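
[If a rough programmatic estimate of the row data is wanted without tracing,
one possible approach (a sketch of my own, not a libpq facility) is to sum the
per-field data lengths plus the DataRow framing. This ignores RowDescription,
CommandComplete, ReadyForQuery and TCP/TLS overhead, but should track the
bytes that crossed the wire more closely than the allocator total:]

    /* Rough sketch (not a libpq API): approximate the wire size of the
     * rows in a PGresult from per-field lengths plus DataRow framing. */
    #include <stddef.h>
    #include <libpq-fe.h>

    static size_t approx_wire_bytes(const PGresult *res)
    {
        int     rows = PQntuples(res);
        int     cols = PQnfields(res);
        size_t  total = 0;

        for (int r = 0; r < rows; r++)
        {
            total += 1 + 4 + 2;     /* DataRow: type byte, Int32 length, Int16 field count */
            for (int c = 0; c < cols; c++)
            {
                total += 4;         /* Int32 field length (sent even for NULLs) */
                if (!PQgetisnull(res, r, c))
                    total += (size_t) PQgetlength(res, r, c);
            }
        }
        return total;
    }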




pgbouncer

2023-06-28 Thread Rita
Seems like I may need to deploy pgbouncer for my webapp. Should I deploy it
on the DB server or on the webserver?
-- 
--- Get your facts first, then you can distort them as you please.--


Re: LibPQ: PQresultMemorySize as proxy to transferred bytes

2023-06-28 Thread Tom Lane
I wrote:
> That number is the total space actually requested from malloc() for
> the PGresult object.  But we request space in blocks (typically 2KB
> each), so there's some overhead due to fields not exactly filling
> a block, unused space in the last block, etc.  If you're testing
> with very small result sets (say a couple hundred KB)

Sigh, obviously I meant a couple hundred *bytes* there.  -ENOCAFFEINE

regards, tom lane




Re: pgbouncer

2023-06-28 Thread Sebastiaan Mannem
It greatly depends on what you want to achieve. Both have pros and cons.
What do you need pgbouncer for?






Re: pgbouncer

2023-06-28 Thread Laurenz Albe
On Wed, 2023-06-28 at 07:19 -0400, Rita wrote:
> seems like I may need to deploy pgbouncer for my webapp. should i deploy it 
> on the db server or on the webserver?

On the database server.  You don't want network latency between pgbouncer and 
PostgreSQL,
so that the connections can be active as much as possible.
That will improve throughput.

Yours,
Laurenz Albe




Re: pgbouncer

2023-06-28 Thread Giovanni Biscontini
On the DB server: the bouncer is designed to optimize many fast connections
to the DB and speed up throughput, so you want the shortest path to the data.
One important thing: remember that different users raise different
connections, so be careful choosing which applications you direct to
pgbouncer.
A couple more things: choose the right pool_mode and tune the
server_connect_timeout value appropriately.
All the best, Giovanni



--
Best Regards
Dott. Giovanni Biscontini
Software Developer & Database Architect
Web: https://www.es2000.it
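
[A minimal pgbouncer.ini sketch illustrating the settings mentioned above;
the database name, credentials, paths and sizes are placeholders to adapt:]

    ; hypothetical pgbouncer.ini -- names, paths and sizes are placeholders
    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = scram-sha-256
    auth_file = /etc/pgbouncer/userlist.txt

    ; session, transaction or statement -- pick what the application can tolerate
    pool_mode = transaction

    ; give up on a backend connection attempt after this many seconds
    server_connect_timeout = 15

    max_client_conn = 500
    default_pool_size = 20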


_
Privacy e riservatezza: il presente messaggio, così come i relativi
allegati, contengono dati ed informazioni da considerarsi strettamente
riservate ed è indirizzato esclusivamente al destinatario sopra indicato,
il quale è l'unico autorizzato a trattarlo in osservanza delle norme del
Regolamento UE 2016/679 (RGPD). Chiunque ricevesse questo messaggio per
errore di evitare di copiarlo, divulgarlo, distribuirlo a terzi e di dare
notizia al mittente dell’errato invio, distruggendo poi l'eventuale copia
cartacea e la copia in formato elettronico.


Re: pgbouncer

2023-06-28 Thread Rita
I have a dedicated database server and 3 additional physical application
servers which hit the database. I periodically find that an application
server is not able to reach the database server (the "is the server on port
5432 listening?" message). Looking at some metrics, it seems like I am
pulling a lot of tuples (300k or so) during that time.




-- 
--- Get your facts first, then you can distort them as you please.--


Re: typical active table count?

2023-06-28 Thread Ben Chobot

Jeremy Schneider wrote on 6/27/23 11:47 AM:

Thanks Ben, not a concern, but I'm trying to better understand how common
this might be. And I think sharing general statistics about how people
use PostgreSQL is a great help to the developers who build and maintain it.

One really nice thing about PostgreSQL is that two quick copies of
pg_stat_all_tables and you can easily see this sort of info.

If you have a database where more than 100 tables are updated within a
10 second period - this seems really uncommon to me - I'm very curious
about the workload.


Well, in our case we have a SaaS model where a moderately complicated 
schema is replicated hundreds of times per db. It doesn't take much load 
to end up scattering writes across many tables (not to mention their 
indices). We do have table partitioning too, but it's a relatively small 
part of our schema and the partitioning is done by date, so we really 
only have one hot partition at a time. FWIW, most of our dbs have 32 cores.


All that aside, as others have said there are many reasonable ways to 
reach the threshold you have set.
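
[A rough SQL sketch of the two-snapshot measurement Jeremy mentions above;
the temp table name is made up and the sleep interval is arbitrary:]

    -- count tables whose write counters moved within a 10-second window
    CREATE TEMP TABLE stat_snap AS
    SELECT relid, n_tup_ins + n_tup_upd + n_tup_del AS writes
    FROM pg_stat_all_tables;

    SELECT pg_sleep(10);

    SELECT count(*) AS tables_written
    FROM pg_stat_all_tables t
    JOIN stat_snap s USING (relid)
    WHERE t.n_tup_ins + t.n_tup_upd + t.n_tup_del > s.writes;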


Re: pgbouncer

2023-06-28 Thread Ben Chobot

Laurenz Albe wrote on 6/28/23 5:27 AM:

On Wed, 2023-06-28 at 07:19 -0400, Rita wrote:

seems like I may need to deploy pgbouncer for my webapp. should i deploy it on 
the db server or on the webserver?

On the database server.  You don't want network latency between pgbouncer and 
PostgreSQL,
so that the connections can be active as much as possible.
That will improve throughput.


This is true, but as somebody who also deploys a fleet of stand-alone
pgBouncers in front of his dbs: a good reason to have a pgBouncer-based
traffic spigot in front of your db server is that you can pause traffic
from the clients to the db during maintenance, up to and including
replacing your entire db server, and all the clients see is a period of
the db acting slow.


For applications that struggle with connection management, this can be 
very valuable.
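
[For reference, that pause/resume workflow goes through pgBouncer's admin
console, roughly like this; host, port and user are placeholders and assume
an admin user is configured in admin_users:]

    $ psql -h pgbouncer-host -p 6432 -U pgbouncer pgbouncer
    pgbouncer=# PAUSE;    -- waits for active queries to finish, holds new ones
    -- ... perform maintenance / swap the backend server ...
    pgbouncer=# RESUME;   -- queued clients proceed against the (new) backend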


need explanation about an explain plan

2023-06-28 Thread Marc Millas
Hi,

Postgres 14.2  (for one more month)
The explain plan and request is here:
https://explain.depesz.com/s/Opk0

The big table is split in around 130 partitions, one by month.
the ladate column is the partition key, and it does have 1 value for each
partition.
there is a




Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com


Re: need explanation about an explain plan

2023-06-28 Thread Marc Millas
Sorry, gmail sent uncompleted message

Hi,

Postgres 14.2  (for one more month)
The explain plan and request is here:
https://explain.depesz.com/s/Opk0

The big table (10 billion rows) is split into around 130 partitions, one per
month.
The ladate column is the partition key, and it has one value per partition.
There is an index on the numfic column. The distribution of values for that
column may differ before and after January 2019.

The request is executed in a loop over all ten years of data: years 1,2,3,
then years 2,3,4, etc.
That request is also executed for columns other than cod, one column at a
time (there are 107 columns...).

I don't understand why the planner uses a filter on the date, as it's already
within the partition structure.

Thanks,

Marc MILLAS






Re: need explanation about an explain plan

2023-06-28 Thread Laurenz Albe
On Wed, 2023-06-28 at 17:29 +0200, Marc Millas wrote:
> https://explain.depesz.com/s/Opk0
> 
> The big table (10 billions raws) is split in around 130 partitions, one by 
> month.
> the ladate column is the partition key, and it does have 1 value for each 
> partition.
> there is an index on the numfic column. the distribution of values for that 
> column may differ before and after 2019 january.
> 
> The request is executed in a loop for all ten years of data, year 1,2,3, then 
> year2,3,4 etc
> that request is also executed for columns other than cod, one column at a 
> time (there are 107 columns...)
> 
> I dont understand why the planner use a filter on the date, as its already 
> within the partition structure.

If the grouping column is the partitioning key, try to set 
enable_partitionwise_join = on.

Otherwise, your best bet is to create an index that covers both WHERE 
conditions,
or a covering index, which will get you the best result:

  CREATE INDEX ON table1 (numfic, ladate) INCLUDE (cod, nb_obs);

Yours,
Laurenz Albe




Re: need explanation about an explain plan

2023-06-28 Thread Marc Millas
On Wed, Jun 28, 2023 at 6:48 PM Laurenz Albe 
wrote:

> On Wed, 2023-06-28 at 17:29 +0200, Marc Millas wrote:
> > https://explain.depesz.com/s/Opk0
> >
> > The big table (10 billions raws) is split in around 130 partitions, one
> by month.
> > the ladate column is the partition key, and it does have 1 value for
> each partition.
> > there is an index on the numfic column. the distribution of values for
> that column may differ before and after 2019 january.
> >
> > The request is executed in a loop for all ten years of data, year 1,2,3,
> then year2,3,4 etc
> > that request is also executed for columns other than cod, one column at
> a time (there are 107 columns...)
> >
> > I dont understand why the planner use a filter on the date, as its
> already within the partition structure.
>
> If the grouping column is the partitioning key, try to set
> enable_partitionwise_join = on.
>
> Otherwise, your best bet is to create an index that covers both WHERE
> conditions,
> or a covering index, which will get you the best result:
>
>   CREATE INDEX ON table1 (numfic, ladate) INCLUDE (cod, nb_obs);
>

Hi Laurenz, as said, in each partition there is only one value for ladate.
I don't understand the point of creating an index on the tens of millions of
rows of each partition, an index which would contain the very same single
value throughout.
I did set enable_partitionwise_join = 'on' and retried,
but it doesn't seem to change the plan.


> Yours,
> Laurenz Albe
>


Re: need explanation about an explain plan

2023-06-28 Thread Laurenz Albe
On Wed, 2023-06-28 at 19:25 +0200, Marc Millas wrote:
> Hi Laurenz, as said, in each partition there is only one value for ladate.

The planner doesn't seem to take that into account.

Yours,
Laurenz Albe




recovery_target_action = shutdown not working on PG11 during PITR restore

2023-06-28 Thread Maher Beg
Hello!

I'm attempting to restore a database (via pgbackrest) on PG11.20 and have
the database shutdown when all of the WAL archives have finished applying.
In my recovery.conf file I have recovery_target_action = 'shutdown' with an
appropriate recovery target time and restore command setup. hot_standby is
set to on.

When Postgres starts up it shows the correct recovery parameters in the
debug logs, but after restoring and applying all the WAL archives it ends up
promoting itself and exiting recovery mode (confirmed via psql with SELECT
pg_is_in_recovery()). I also see that recovery.conf is renamed to recovery.done.

I've tried changing the target action to pause and that still results in
promotion, as well as setting it to something invalid which causes Postgres
to correctly crash. It appears to be parsing correctly, but maybe I'm
missing an additional configuration or startup parameter to have PG11
restore and shutdown correctly. I've also tried hot_standby on and off, as
well as messing with standby_mode.

This same process with slight tweaks works just fine for PG12-PG15. Is
there anything else I should be checking or testing?

-Maher
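
[For context, a recovery.conf along the lines described above might look
roughly like this; the stanza name, target time and the rest are placeholders
taken from the description, not a verified fix:]

    # hypothetical PG11 recovery.conf sketch for PITR via pgbackrest
    restore_command = 'pgbackrest --stanza=main archive-get %f "%p"'
    recovery_target_time = '2023-06-27 00:00:00+00'
    recovery_target_action = 'shutdown'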


Wrong passwords allowed

2023-06-28 Thread Atul Kumar
Hi,

I am having a strange issue: I have a Postgres instance running version 12
on CentOS 7.

Whenever I try to log in to the database using the psql -W option and enter
a wrong password for that user, I am still logged in to the database.

Could you help me understand why this strange behavior is happening?

It accepts any password for any user, be it the right password or a wrong
one.

What could be the problem?


Regards.


Re: Wrong passwords allowed

2023-06-28 Thread Adrian Klaver

On 6/28/23 17:45, Atul Kumar wrote:
> It accepts any password for any user, be it the right password or a wrong
> one. What could be the problem?

Not reading this:

https://www.postgresql.org/docs/current/auth-pg-hba-conf.html

and this:

https://www.postgresql.org/docs/current/auth-trust.html

or this:

https://www.postgresql.org/docs/current/auth-peer.html









--
Adrian Klaver
adrian.kla...@aklaver.com
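
[To illustrate what those links describe: if the matching pg_hba.conf line
uses the trust method (or, for local connections, peer), the server never
verifies the password, so psql -W prompts but anything is accepted. A
hypothetical before/after; addresses and methods are placeholders, and the
server must be reloaded after editing:]

    # hypothetical pg_hba.conf entry -- "trust" accepts any password (or none)
    # TYPE  DATABASE  USER  ADDRESS      METHOD
    host    all       all   0.0.0.0/0    trust

    # require real password verification instead
    host    all       all   0.0.0.0/0    scram-sha-256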