Re: About the stability of COPY BINARY data

2024-11-07 Thread Adrian Klaver

On 11/7/24 09:55, Dominique Devienne wrote:

On Thu, Nov 7, 2024 at 6:39 PM Daniel Verite  wrote:

 Dominique Devienne wrote:

Also, does the code for per-type _send() and _recv() functions
really change across versions of PostgreSQL? How common are
instances of such changes across versions? Any examples of such
backward-incompatible changes, in the past?


For the timestamp types, I think these functions were
sending/expecting float8 (before version 7.3), and then float8 or
int64 depending on the server configuration up until 9.6, and since
then int64 only.
The same for the "time" field of the interval type.
There is still an "integer_datetimes" GUC reflecting this.


Thanks. So it did happen in a distant past.
Anything below 14 is of no concern to me though.
So again, it does sound like changes are unlikely.


Yeah that is implied by:

https://www.postgresql.org/docs/current/pgupgrade.html

"Major PostgreSQL releases regularly add new features that often change 
the layout of the system tables, but the internal data storage format 
rarely changes. "


The COPY warning is there as a heads-up that it is a possibility.



And I haven't seen anything that isn't network byte order,
as far as architecture is concerned.




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: About the stability of COPY BINARY data

2024-11-07 Thread Dominique Devienne
On Thu, Nov 7, 2024 at 7:04 PM Adrian Klaver  wrote:
> On 11/7/24 09:55, Dominique Devienne wrote:
> > On Thu, Nov 7, 2024 at 6:39 PM Daniel Verite  
> > wrote:
> >>  Dominique Devienne wrote:
> >>> Also, does the code for per-type _send() and _recv() functions
> >>> really change across versions of PostgreSQL? How common are
> >>> instances of such changes across versions? Any examples of such
> >>> backward-incompatible changes, in the past?
> >>
> >> For the timestamp types, I think these functions were
> >> sending/expecting float8 (before version 7.3), and then float8 or
> >> int64 depending on the server configuration up until 9.6, and since
> >> then int64 only.
> >> The same for the "time" field of the interval type.
> >> There is still an "integer_datetimes" GUC reflecting this.
> >
> > Thanks. So it did happen in a distant past.
> > Anything below 14 is of no concern to me though.
> > So again, it does sound like changes are unlikely.
>
> Yeah that is implied by:
>
> https://www.postgresql.org/docs/current/pgupgrade.html
>
> "Major PostgreSQL releases regularly add new features that often change
> the layout of the system tables, but the internal data storage format
> rarely changes. "
>
> The COPY warning is there as a heads-up that it is a possibility.
> >
> > And I haven't seen anything that isn't network byte order,
> > as far as architecture is concerned.

But the COPY BINARY format and "the internal data storage format"
are two separate things, Adrian, AFAIK.

Using binds and result sets in binary mode is part of the protocol, in a way,
and not an internal implementation detail like the on-disk format for tables.

I'm sure I'm not the only one using binary mode with PostgreSQL, and any change in
the _send and _recv functions across versions, or making their results architecture
dependent, would break a lot of client code if it happened.
And COPY BINARY's outer format is also public and documented, so it can't change
either.
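For instance, a minimal round trip through that documented outer format (a sketch
against a hypothetical table and server-side path, for illustration only):

    -- Hypothetical sketch: round-trip a table through the documented binary format.
    -- The file starts with the fixed signature "PGCOPY\n\377\r\n\0", a flags field and
    -- a header-extension length, then per-tuple data produced by each type's *send
    -- function (multi-byte values in network byte order), and ends with a -1 word.
    CREATE TABLE t (id bigint PRIMARY KEY, payload bytea);
    COPY t TO '/tmp/t.bin' WITH (FORMAT binary);   -- server-side file, needs suitable privileges
    COPY t FROM '/tmp/t.bin' WITH (FORMAT binary);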

That's why I'm insisting on that phrase in the documentation, which
gives the wrong impression IMHO.
Shouldn't it be removed or amended? Thanks, --DD




Re: About the stability of COPY BINARY data

2024-11-07 Thread Adrian Klaver

On 11/6/24 08:20, Dominique Devienne wrote:

From https://www.postgresql.org/docs/current/sql-copy.html:

|> binary-format file is less portable across machine architectures
and PostgreSQL versions

In my experience, the binary encoding of binds/result sets/COPY is
endian neutral (network byte order), so what is the "less portable
across machine architectures" warning about?

Also, does the code for per-type _send() and _recv() functions really change
across versions of PostgreSQL? How common are instances of such
changes across versions? Any examples of such backward-incompatible
changes, in the past?

The binary data contains OIDs, but if one sticks to built-in types,
aren't those OIDs unlikely to change across versions?

I'm obviously storing COPY BINARY data (we have lots of bytea
columns), and I wonder how bad it is long term, and across PostgreSQL
versions.


If I were to hazard a guess, this plays a part:

https://www.postgresql.org/docs/current/sql-copy.html

"To determine the appropriate binary format for the actual tuple data 
you should consult the PostgreSQL source, in particular the *send and 
*recv functions for each column's data type (typically these functions 
are found in the src/backend/utils/adt/ directory of the source 
distribution)."




Thanks for any insights, --DD




--
Adrian Klaver
adrian.kla...@aklaver.com





How to recover correctly master and replica using backup made by pg_basebackup?

2024-11-07 Thread Evgeniy Ratkov

Hello.

I am trying to use pg_basebackup to make a backup and to recover a master and a
replica from it. I recover the master with the action "promote". Next, I recover the
replica from the same backup with the action "shutdown". After that, I start the
replica with a connection and a replication slot configured to point at the master.
I get an error, because the replica reads and applies all the WAL of an irrelevant
timeline, which contains records written after the restore point.
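
Roughly, each of those recoveries is driven by settings along these lines (a sketch
with a placeholder archive path; a recovery.signal file is also created in the data
directory):

    -- Sketch of the recovery settings behind the "promote" / "shutdown" recoveries.
    ALTER SYSTEM SET restore_command = 'cp /path/to/archive/%f "%p"';
    ALTER SYSTEM SET recovery_target_name = 'rp_repl_bug';
    ALTER SYSTEM SET recovery_target_action = 'promote';  -- or 'shutdown' for the replica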

I wrote a Perl script for pg_basebackup's TAP tests; it is attached as
"050_check_replication.pl". It works on PostgreSQL 17, commit
7d85d87f4d5c35fd5b2d38adaef63dfbfa542ccc, after applying a small patch that adds the
possibility of starting a node with the shutdown option.

The patch is attached as "add_node_start_with_shutdown.diff".

The script creates a situation in which the timeline changes several times. A small
diagram may explain it:

Timeline 1:
  Create the first backup
  Create restore point 'rp_repl_bug'
Timeline 1 -> 2:
  Recovery with promote using the first backup and 'rp_repl_bug'
  Create the second backup
  Create restore point 'rp_repl_bug2'
Timeline 2 -> 3:
  Recovery with promote using the second backup and 'rp_repl_bug2'
Timeline 1 -> 4:
  Recovery with promote using the first backup and 'rp_repl_bug'

After that I get a "0004.history" file, which looks strange:
1    0/390    at restore point "rp_repl_bug"
2    0/590    at restore point "rp_repl_bug2"
3    0/390    at restore point "rp_repl_bug"

Next, I try to restore the replica from the first backup. The replica did not find WAL
for timeline 4, so it downloaded and applied WAL for timeline 2.

cp: cannot stat '${ARCHIVE_PATH}/00040003': No such file 
or directory

LOG:  redo starts at 0/228
cp: cannot stat '${ARCHIVE_PATH}/00040003': No such file 
or directory
'${ARCHIVE_PATH}/00020003' -> 
'${REPLICA_PGDATA}/pg_wal/RECOVERYXLOG'

LOG:  restored log file "00020003" from archive
LOG:  completed backup recovery with redo LSN 0/228 and end LSN 
0/2000120

LOG:  consistent recovery state reached at 0/2000120
LOG:  recovery stopping at restore point "rp_repl_bug", time 2024-11-05 
22:00:16.037126+03


Then, at the replica's next start (with the connection to the master and the
replication slot configured), the replica cannot start receiving WAL from the master,
because it has applied all the WAL from timeline 2, including everything after the LSN
at which it should have switched timelines:

LOG:  entering standby mode
LOG:  redo starts at 0/228
LOG:  consistent recovery state reached at 0/390
LOG:  database system is ready to accept read-only connections
LOG:  started streaming WAL from primary at 0/400 on timeline 4
FATAL:  could not receive data from WAL stream: ERROR:  requested 
starting point 0/400 is ahead of the WAL flush position of this 
server 0/3002000


If I restart the replica, it does not start:
LOG:  database system was shut down in recovery at 2024-11-05 22:00:18 MSK
LOG:  entering standby mode
FATAL:  requested timeline 4 is not a child of this server's history
DETAIL:  Latest checkpoint is at 0/390 on timeline 2, but in the 
history of the requested timeline, the server forked off from that 
timeline at 0/590.

LOG:  startup process (PID 2384880) exited with exit code 1

Questions:

1) Why does the replica not switch to the next timeline when it reaches an LSN in the
WAL that is present in the history file? And why does the replica apply all of the WAL?
2) Why does 0004.history contain information about the switches from timelines 2 and 3,
even though timeline 4 was forked off the first timeline?
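
For reference, the timeline each node currently considers itself to be on can be
checked like this (a quick sketch, assuming a reasonably recent release):

    -- Timeline of the last checkpoint on this node (works on primary and standby).
    SELECT timeline_id, prev_timeline_id FROM pg_control_checkpoint();
    -- On a streaming standby, the timeline currently being received from the primary.
    SELECT received_tli FROM pg_stat_wal_receiver;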

Thank you in advance.
diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm
index 007571e948..e18eb3aace 100644
--- a/src/test/perl/PostgreSQL/Test/Cluster.pm
+++ b/src/test/perl/PostgreSQL/Test/Cluster.pm
@@ -1112,7 +1112,7 @@ instead return a true or false value to indicate success or failure.
 
 =cut
 
-sub start
+sub _start
 {
 	my ($self, %params) = @_;
 	my $port = $self->port;
@@ -1150,11 +1150,23 @@ sub start
 		BAIL_OUT("pg_ctl start failed") unless $params{fail_ok};
 		return 0;
 	}
+}
 
+sub start
+{
+	my ($self) = @_;
+	$self->_start;
 	$self->_update_pid(1);
 	return 1;
 }
 
+sub start_with_enabled_shutdown
+{
+	my ($self) = @_;
+	$self->_start;
+	return 1;
+}
+
 =pod
 
 =item $node->kill9()


050_check_replication.pl
Description: Perl program


Re: Trouble using pg_rewind to undo standby promotion

2024-11-07 Thread Craig McIlwee
>
>
>> Are you archiving WAL on the promoted machine in a way that your
>> restore_command can find it? Check archive_command and archive_mode on the
>> promoted machine.
>>
>
> No, the promoted machine is not archiving.  How should that work?  Is it
> OK for a log shipping standby that uses restore_command to also push to the
> same directory with an archive_command or would that cause issues of trying
> to read and write the same file simultaneously during WAL replay?  Or
> should I be setting up an archive_command that pushes to a separate
> directory and have a restore_command that knows to check both locations?
>
> Hmm, as I write that out, I realize that I could use archive_mode = on
> instead of archive_mode = always to avoid the potential for read/write
> conflicts during WAL replay.  I can try this later and report back.
>

Setting archive_mode = on and a restore_command that reads from the WAL
archive did the trick.  With those changes in place, I was able to
successfully run pg_rewind and get the promoted standby back onto timeline
1.  Thanks for the tips.
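
For anyone hitting the same thing later, the promoted standby's archiving setup ended
up roughly along these lines (a sketch; the script paths are placeholders for the
archive/restore scripts shown elsewhere in the thread, and archive_mode needs a
restart to change):

    -- Sketch of the promoted standby's archiving configuration.
    ALTER SYSTEM SET archive_mode = 'on';   -- 'on' only archives once promoted to primary,
                                            -- avoiding read/write races during WAL replay
    ALTER SYSTEM SET archive_command = '/path/to/archive_wal.sh %p %f';
    ALTER SYSTEM SET restore_command = '/path/to/restore_wal.sh %p %f';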

Craig


Re: About the stability of COPY BINARY data

2024-11-07 Thread Daniel Verite
Dominique Devienne wrote:

> Also, does the code for per-type _send() and _recv() functions
> really change across versions of PostgreSQL? How common are
> instances of such changes across versions? Any examples of such
> backward-incompatible changes, in the past?

For the timestamp types, I think these functions were
sending/expecting float8 (before version 7.3), and then float8 or
int64 depending on the server configuration up until 9.6, and since
then int64 only.
The same for the "time" field of the interval type.
There is still an "integer_datetimes" GUC reflecting this.
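
For completeness, the representation a server uses can be checked directly; on any
modern server this is read-only and reports "on":

    -- Whether timestamps are sent as 64-bit integer microseconds in binary mode.
    -- The compile-time float-timestamp option was removed in PostgreSQL 10.
    SHOW integer_datetimes;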


Best regards,
-- 
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite




Re: About the stability of COPY BINARY data

2024-11-07 Thread Dominique Devienne
On Thu, Nov 7, 2024 at 5:37 PM Adrian Klaver  wrote:
>
> On 11/6/24 08:20, Dominique Devienne wrote:
> > From https://www.postgresql.org/docs/current/sql-copy.html:
> > |> binary-format file is less portable across machine architectures
> > and PostgreSQL versions
> >
> > In my experience, the binary encoding of binds/result sets/COPY is
> > endian neutral (network byte order), so what is the "less portable
> > across machine architectures" warning about?
> >
> > Also, does the code for per-type _send() and _recv() functions really change
> > across versions of PostgreSQL? How common are instances of such
> > changes across versions? Any examples of such backward-incompatible
> > changes, in the past?
> >
> > The binary data contains OIDs, but if one sticks to built-in types,
> > aren't those OIDs unlikely to change across versions?
> >
> > I'm obviously storing COPY BINARY data (we have lots of bytea
> > columns), and I wonder how bad it is long term, and across PostgreSQL
> > versions.
>
> If I were to hazard a guess, this plays a part:
>
> https://www.postgresql.org/docs/current/sql-copy.html
>
> "To determine the appropriate binary format for the actual tuple data
> you should consult the PostgreSQL source, in particular the *send and
> *recv functions for each column's data type (typically these functions
> are found in the src/backend/utils/adt/ directory of the source
> distribution)."

Hi Adrian.

Well, sure. The questions above are whether those type-specific formats are:
1) architecture dependent (that's not my experience), and
2) subject to change across PostgreSQL versions.

Not what the actual formats are. --DD

PS: I'm surprised I didn't get answers. It seems to me the doc is overly "careful"
about COPY BINARY's stability, thus my asking for confirmation here.




Re: About the stability of COPY BINARY data

2024-11-07 Thread Dominique Devienne
On Thu, Nov 7, 2024 at 6:39 PM Daniel Verite  wrote:
> Dominique Devienne wrote:
> > Also, does the code for per-type _send() and _recv() functions
> > really change across versions of PostgreSQL? How common are
> > instances of such changes across versions? Any examples of such
> > backward-incompatible changes, in the past?
>
> For the timestamp types, I think these functions were
> sending/expecting float8 (before version 7.3), and then float8 or
> int64 depending on the server configuration up until 9.6, and since
> then int64 only.
> The same for the "time" field of the interval type.
> There is still an "integer_datetimes" GUC reflecting this.

Thanks. So it did happen in a distant past.
Anything below 14 is of no concern to me though.
So again, it does sound like changes are unlikely.

And I haven't seen anything that isn't network byte order,
as far as architecture is concerned.




Re: Advice on cluster architecture for two related, but distinct, use cases

2024-11-07 Thread Zahid Rahman
Perhaps a 14-minute investment in this article may prove fruitful.

https://medium.com/@martin.hodges/adding-a-postgres-high-availability-database-to-your-kubernetes-cluster-634ea5d6e4a1



On Thu, 7 Nov 2024, 21:06 Matthias Leisi,  wrote:

> Dear all,
>
> (This is a follow-up to a question I asked almost exactly a year ago,
> https://postgrespro.com/list/thread-id/2670756#726f3765-858c-4ac0-a7b0-5cb6720e4...@leisi.net
>  -
> the requirements have changed since then, and the platform has changed from
> OpenBSD to Linux, which may make some things easier.)
>
>
> I’m looking for advice on Postgres cluster architecture(s) for two related
> but distinct use cases. Ideally, the approaches for the two use cases would
> not differ too widely.
>
> The goal of clustering is low RPO (I guess we need sync clustering) and
> RTO (ideally almost-instant failover, but a failover process of up to a
> minute in the worst case could be acceptable); throughput is not a concern
> (it’s relatively low transaction volume except for some often-written
> statistics data, which is still moderate). Latency (due to the distance
> between datacenters for georedundancy) is a fact we are willing to accept.
>
>
> The first use case is in an environment under our own control (and where
> eg a DBA could intervene). We can theoretically run any number of cluster
> instances, but assume we would use an even number (split over the two
> datacenters), or potentially an odd number of nodes (eg with an arbiter).
> We could use a load balancer, but I guess this would strongly deviate from
> the second use case:
>
>
> In the second use case, the environment is not under our control, so we
> can only assume basic network connectivity from the application to the DB,
> and between the DBs (the latter potentially through an SSH tunnel if
> needed). In this use case, we can not assume a person to intervene if a
> node goes down, and would prefer some automated failover to the other node
> (this automation would also be welcome for the first use case, eg if
> something happens while nobody is watching). We can not assume eg a load
> balancer.
>
> There could be various ways how the environment in the second use case is
> set up, ranging from „application and database running on the same box“
> (well, no clustering for you then…), to dedicated two- or three node
> database cluster serving a number of application machines.
>
>
> In both use cases, we have full control over the application and the
> database code and environment.
>
> From reading various docs, it seems we would need something like Patroni
> (/Percona), at least for the first use case. However it seems relatively
> complex to set up and operate.
>
> I would appreciate your experience and input into which approach would
> best fit the two use cases. We are also willing to engage in paid
> consulting.
>
> Thanks,
> — Matthias
>
> --
> Matthias Leisi
> Katzenrütistrasse 68, 8153 Rümlang
> 
> Mobile +41 79 377 04 43
> matth...@leisi.net
>


Re: Trouble using pg_rewind to undo standby promotion

2024-11-07 Thread Torsten Förtsch
Your point of divergence is in the middle of the 7718/00BF file. So,
you should have 2 such files eventually, one on timeline 1 and the other on
timeline 2.

Are you archiving WAL on the promoted machine in a way that your
restore_command can find it? Check archive_command and archive_mode on the
promoted machine.

Also, do your archive/restore scripts work properly for history files?

On Wed, Nov 6, 2024 at 7:48 PM Craig McIlwee  wrote:

> I have a primary -> standby 1 -> standby 2 setup with all servers running
> PG 13.8 (this effort is part of getting on to a newer version, but I think
> those details aren't relevant to this problem).  The first standby uses
> streaming replication from the primary and the second standby is using a
> WAL archive with a restore_command.  To make this standby chain work,
> standby 1 is set to archive_mode = always with a command that populates the
> WAL archive.
>
> I would like to be able to promote standby 2 (hereon referred to just as
> 'standby'), perform some writes, then rewind it back to the point before
> promotion so it can become a standby again.  The documentation for
> pg_rewind says that this is supported and it seems like it should be
> straightforward, but I'm not having any luck getting this to work so I'm
> hoping someone can point out what I'm doing wrong.  Here's what I did:
>
> First, observe that WAL is properly being applied from the archive.  Note
> that we are currently on timeline 1.
>
> 2024-11-06 09:51:23.286 EST [5438] LOG:  restored log file
> "0001771100F9" from archive
> 2024-11-06 09:51:23.434 EST [5438] LOG:  restored log file
> "0001771100FA" from archive
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0001771100FB.gz: No such file or
> directory
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0002.history.gz: No such file or directory
>
> Next, stop postgres, set wal_log_hints = on as required by pg_rewind, and
> restart postgres.  I also make a copy of the data directory while the
> postgres is not running so I can repeat my test, which works fine on a
> small test database but won't be possible for the multi TB database that I
> will eventually be doing this on.
>
> Now promote the standby using "select pg_promote()" and see that it
> switches to a new timeline.  You can also see that the last WAL applied
> from the archive is 7718/BF.
>
> 2024-11-06 12:10:10.831 EST [4336] LOG:  restored log file
> "0001771800BD" from archive
> 2024-11-06 12:10:10.996 EST [4336] LOG:  restored log file
> "0001771800BE" from archive
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0001771800BF.gz: No such file or
> directory
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0002.history.gz: No such file or directory
> 2024-11-06 12:10:15.384 EST [4336] LOG:  restored log file
> "0001771800BF" from archive
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0001771800C0.gz: No such file or
> directory
> 2024-11-06 12:10:15.457 EST [4336] LOG:  received promote request
> 2024-11-06 12:10:15.457 EST [4336] LOG:  redo done at 7718/BF30
> 2024-11-06 12:10:15.457 EST [4336] LOG:  last completed transaction was at
> log time 2024-11-06 12:10:22.627074-05
> 2024-11-06 12:10:15.593 EST [4336] LOG:  restored log file
> "0001771800BF" from archive
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0002.history.gz: No such file or directory
> 2024-11-06 12:10:15.611 EST [4336] LOG:  selected new timeline ID: 2
> 2024-11-06 12:10:15.640 EST [4336] LOG:  archive recovery complete
> /data/wal_archive/restore_operational.sh: line 2:
> /data/wal_archive/operational/0001.history.gz: No such file or directory
> 2024-11-06 12:10:17.028 EST [4329] LOG:  database system is ready to
> accept connections
>
> Next, insert a record into just to make some changes that I want to
> rollback later.  (What I will eventually be doing is creating a publication
> to ship data to a newer version, but again, that's not what's important
> here.)
>
> Finally, shutdown postgres and attempt a rewind.  The address used in the
> --source-server connection string is the address of the primary.
>
> 2024-11-06 12:11:11.139 EST [4329] LOG:  received fast shutdown request
> 2024-11-06 12:11:11.143 EST [4329] LOG:  aborting any active transactions
> 2024-11-06 12:11:11.144 EST [4329] LOG:  background worker "logical
> replication launcher" (PID 5923) exited with exit code 1
> 2024-11-06 12:11:40.933 EST [4342] LOG:  shutting down
> 2024-11-06 12:11:41.753 EST [4329] LOG:  database system is shut down
>
> /usr/pgsql-13/bin/pg_rewind --target-pgdata=/data/pgsql/operational
> --source-server="host=x.x.x.x dbname=postgres user=xxx password=xxx"
> --dry-run --progress --re

Re: Trouble using pg_rewind to undo standby promotion

2024-11-07 Thread Craig McIlwee
On Thu, Nov 7, 2024 at 4:47 AM Torsten Förtsch 
wrote:

> Your point of divergence is in the middle of the 7718/00BF file. So,
> you should have 2 such files eventually, one on timeline 1 and the other on
> timeline 2.
>
> Are you archiving WAL on the promoted machine in a way that your
> restore_command can find it? Check archive_command and archive_mode on the
> promoted machine.
>

No, the promoted machine is not archiving.  How should that work?  Is it OK
for a log shipping standby that uses restore_command to also push to the
same directory with an archive_command or would that cause issues of trying
to read and write the same file simultaneously during WAL replay?  Or
should I be setting up an archive_command that pushes to a separate
directory and have a restore_command that knows to check both locations?

Hmm, as I write that out, I realize that I could use archive_mode = on
instead of archive_mode = always to avoid the potential for read/write
conflicts during WAL replay.  I can try this later and report back.

Also, do your archive/restore scripts work properly for history files?
>

The scripts don't do anything special with history files.  They are based
on the continuous archiving docs [1] and this article [2], with a slight
modification to include a throttled scp, since the log shipping server is
located in a different data center from the promoted standby and there is
limited bandwidth between the two.  (Also note that the archive script from
[2] is adapted to properly handle file transfer failures - the one in the
article uses the exit code of the rm command, so postgres isn't informed
when the file transfer fails, resulting in missing WAL in the archive.)

Archive script:
---
#!/bin/bash

# $1 = %p
# $2 = %f

limit=10240 # 10Mbps

gzip < /var/lib/pgsql/13/data/$1 > /tmp/archive/$2.gz

scp -l $limit /tmp/archive/$2.gz postgres@x.x.x.x:/data/wal_archive/operational/$2.gz
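# capture scp's status; archive_command must return non-zero if the transfer failed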
exit_code=$?

rm /tmp/archive/$2.gz

exit $exit_code
---

Restore script:
---
gunzip < /data/wal_archive/operational/$2.gz > $1
---

[1]
https://www.postgresql.org/docs/13/continuous-archiving.html#COMPRESSED-ARCHIVE-LOGS
[2]
https://www.rockdata.net/tutorial/admin-archive-command/#compressing-and-archiving

Craig



Re: Performance Issue with Hash Partition Query Execution in PostgreSQL 16

2024-11-07 Thread ravi k
Hi,

Thanks for the suggestions.

A few more observations:

1) No sequential scans noticed in pg_stat_user_tables (I hope the stats are
accurate in Postgres 16). If parameter sniffing were happening, the chance of
falling back to a sequential scan would be higher, right?

2) No blocking or I/O issues during that time.

3) Even with the LIMIT clause, and even if it touched all partitions, it should
have completed in milliseconds, as this is just one record.

4) We cannot enable auto_explain in production, as it is expensive and with high
TPS we may face latency issues, and the issue cannot be reproduced in a lower
environment (it happens in about one case in a million).

This is a puzzle to us; just in case anyone has experienced it, please share your
experience.
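
One low-impact way to test the cached-plan theory David raises below would be
something like this (a sketch; the table and parameter are hypothetical):

    -- Hypothetical sketch: see whether a generic cached plan explains the slow case.
    -- After the fifth execution a prepared statement may switch to a generic plan.
    PREPARE q(bigint) AS
        SELECT * FROM orders WHERE order_id = $1;   -- hypothetical partitioned table
    EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(12345);    -- run six or more times, compare plans
    SET plan_cache_mode = force_custom_plan;        -- PostgreSQL 12+, per-session
    EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(12345);
    DEALLOCATE q;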

Regards,
Ravi

On Thu, 7 Nov, 2024, 3:41 am David Mullineux,  wrote:

> It might be worth eliminating the use of cached plans here. Is your app
> using prepared statements at all?
> The point is that if the optimizer sees the same prepared query 5 times,
> then it locks in the plan that it found at that time. This is a good trade-off,
> as it avoids costly planning time for repetitive queries. But if you are
> querying manually, then a custom plan will be generated anew.
> A quick ANALYZE of the table should reset the stats and invalidate any
> cached plans.
> This may not be your problem - just worth eliminating it from the list of
> potential causes.
>
> On Wed, 6 Nov 2024, 17:14 Ramakrishna m,  wrote:
>
>> Hi Team,
>>
>> One of the queries, which retrieves a single record from a table with 16
>> hash partitions, is taking more than 10 seconds to execute. In contrast,
>> when we run the same query manually, it completes within milliseconds. This
>> issue is causing exhaustion of the application pools. Do we have any bugs
>> in Postgres 16 hash partitions? Please find the attached log, table, and
>> execution plan.
>>
>> Size of each partition : 300GB
>> Index Size : 12GB
>>
>> Postgres Version : 16.x
>> Shared Buffers : 75 GB
>> Effective_cache :  175 GB
>> Work_mem : 4MB
>> Max_connections : 3000
>>
>> OS  : Ubuntu 22.04
>> Ram : 384 GB
>> CPU : 64
>>
>> Please let us know if you need any further information or if there are
>> additional details required.
>>
>>
>> Regards,
>> Ram.
>>
>