Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-18 Thread Ron Johnson
On Mon, Oct 13, 2025 at 4:47 AM Dominique Devienne 
wrote:
[snip]

> There's apparently no way to abstract the "transport" between libpq
> and the server, must be TCP (or *nix only socket files), cannot be an
> in-memory channel (for the embedded non-shared case),
>

I'd bet a nickel that local socket transfers all stay in memory.


> nor shared-memory (for the shared-case across localhost processes).
>

Shared memory means that I can stomp all over you, and you can't stop me.
That's the antithesis of ACID.



Does Java 8 support drivers 42.2.19 and 42.7.6?

2025-10-18 Thread Артем Романюк
Good afternoon!

Dear PostgreSQL community. May I ask the developers whether JDBC driver
versions 42.2.19 and 42.7.6 officially support Java 8? It is important
that support is full and problem-free.

Thank you!


Re: Alerting on memory use and instance crash

2025-10-18 Thread Adrian Klaver

On 10/8/25 11:58, sud wrote:

Thank you.
My understanding may be wrong here, and my apologies for using the 
example of Oracle again even though the two are not the same. But having 
worked in Oracle for a long time, I am trying to understand exactly how 
it's different.


In Oracle RAC (Real Application Clusters), we have a single database 
with multiple nodes/instances, which means the underlying storage is 
shared but the memory/cpu of each of those instances is separate, and 
any of the instances can be down while the database still operates, 
routing the application traffic of the downed node to the others. 
Similarly, in AWS Aurora Postgres there can also be multiple instances, 
such as Writer and Reader instances/nodes, with the underlying storage 
being the same. So I was wondering whether there are any cluster-level 
pg_* views we could query to know if any one of the nodes is down? 
Also, I don't see any pg_* view that shows the statistics of all the 
instances combined, i.e. cluster-level statistics.


Do you mean that in normal Postgres there is always a single 
instance/memory with a single storage attached? Then I also do not see 
any such cluster-level views in AWS Aurora Postgres either? Pardon me 
if it's a silly question.




It would be helpful if you specified exactly what variety of Postgres 
you are using and its version.


If you are using AWS Aurora Postgres then you will need to look at pages 
like this:


https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Managing.html

This list is for the community version of Postgres, and it's been a long 
time since AWS saw fit to have someone on the list; when they were here 
they did not really provide answers.


--
Adrian Klaver
[email protected]




Re: How do I upsert depending on a second table?

2025-10-18 Thread Samuel Marks
The AS keyword is optional for aliases; see:
https://www.postgresql.org/docs/current/sql-select.html

`SELECT * FROM actual_tablename table0 WHERE table0.column00 = 1`

(I used a space instead of AS)

On Tue, Sep 23, 2025 at 3:52 PM Adrian Klaver  wrote:
>
> On 9/23/25 13:36, Samuel Marks wrote:
> > Attempt:
> > ```sql
> > CREATE TABLE org
> > (
> >  "name"  VARCHAR(50) PRIMARY KEY,
> >  owner   VARCHAR(50) NOT NULL
> > );
> >
> > CREATE TABLE repo
> > (
> >  "id"   INTEGER PRIMARY KEY,
> >  full_name  VARCHAR(255) UNIQUE NOT NULL,
> >  orgVARCHAR(50) NOT NULL REFERENCES org ("name")
> > );
> >
> > INSERT INTO org(name, owner) VALUES ('org0', 'user0');
> >
> > INSERT INTO repo (id, full_name, org)
> > VALUES (0, 'org0/name0 by wrong user', 'org0')
> > ON CONFLICT (full_name) DO UPDATE
> >  SET full_name = EXCLUDED.full_name,
> >  org   = EXCLUDED.org
> > WHERE EXISTS (SELECT 1
> >FROM org org_tbl
> >WHERE org_tbl.name = EXCLUDED.org
> >  AND org_tbl.owner = 'wrong user')
>
> Where is org_tbl?
>
> Or is this a copy and paste error?
>
> > RETURNING *;
> >
> > SELECT * FROM repo WHERE id = 0;
> > ```
> >
> > This all succeeds. It should fail because the 'wrong user' is trying
> > to create a new—or update an existing—repo.
> >
> > Thanks for all suggestions
> >
> >
>
>
> --
> Adrian Klaver
> [email protected]




Query on Patch and Upgrade History in PostgreSQL

2025-10-18 Thread loganathan P
Dear All,

How do I find the date and time of applied minor patches and upgrades at
the PostgreSQL database level?

Thanks.

Regards,
Loganathan P


Re: Pgbackrest changing RepoServer to new one

2025-10-18 Thread Ron Johnson
On Thu, Oct 9, 2025 at 3:16 AM KK CHN  wrote:

> List,
>
> I am in need of changing the repo server of an existing, working
> pgbackrest setup (RHEL 9.3, pgbackrest 2.52.1, database version 16).
>
> *From googling, I got the information that I need to:*
>
> 1. First shut down the Postgres server cluster/instance. (This is a
> production server; downtime permission is required if need be.)
>
> 2. Stop the stanza and delete the stanza.
>
> 3. Then recreate the stanza with the same name from the new repo server.
>
> I need to change the repo server because storage limitations on the
> current one force a move to a larger repo server.
>
> Can I bypass step 1 (shutting down the Postgres server) and do steps 2
> and 3, or will this break the entire DB server and backup setup?
>
> Request your valuable guidance and advice.
>

Repo entries in pgbackrest.conf are numbered.  "Just" add "repo2-" entries
to your existing pgbackrest.conf, and then add "--repo=2" to your existing
pgbackrest commands.

(That *should* work.  I've never had to try it.)
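
Something along these lines, as an untested sketch (hosts and paths are
illustrative; the option names assume a standard multi-repo setup):

[global]
repo1-path=/var/lib/pgbackrest
repo2-host=new-repo.example.com
repo2-path=/var/lib/pgbackrest

and then, for example:

pgbackrest --stanza=yourstanza --repo=2 stanza-create
pgbackrest --stanza=yourstanza --repo=2 backup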



Direct Major Upgrade (13.7 → 17.6) with pg_upgrade: Clarifying the need for Intermediate Minor Fixes/Scripts

2025-10-18 Thread Vu Le (JData - HN)
Hi pgsql-general community,

I am planning a significant major version upgrade for a large-scale
PostgreSQL production environment, currently running PostgreSQL 13.7.
Our target is the latest major version, 17.6, utilizing the pg_upgrade
tool.

My core question revolves around the recommended Extra Works/Manual
Fixes from the intermediate minor and major versions.

*** THE PROBLEM / MY CONCERN

I've been reviewing various minor release notes and community advice.
It is common to see specific instructions for manual steps in certain
minor versions, such as:

 - REINDEX requirements for specific index types.

 - DROP/CREATE operations.

 - Executing specialized SQL scripts (e.g., fix-CVE-2024-4317.sql in
some intermediate versions) to address data corruption or security
fixes that cannot be automatically handled by standard upgrades.

My question is: When performing a direct jump from 13.7 to 17.6 using
pg_upgrade, do I need to manually identify and execute the cumulative
extra works/fixes from ALL the skipped intermediate versions (13.8,
13.9, 14.x, 15.x, 16.x)?

*** STEPS TAKEN SO FAR

 - Environment: Production (High Traffic, Large Data Volume).

 - Method: Planning to use pg_upgrade.

 - Testing: I have successfully performed the direct upgrade (13.7 ->
17.6) in our UAT/Dev environment. Functionality appears normal, but my
customer's self-built UAT/Dev environment is known not to perfectly
replicate 100% of our Production behavior, hence my caution.

*** ASK FOR BEST PRACTICES

Is there an official best practice document that clarifies how
pg_upgrade handles these "extra works" from skipped minor releases?

Does pg_upgrade implicitly incorporate all necessary structural fixes
from the minor versions between 13.7 and 17.6?

Has anyone in the community performed a similar large jump and found
unexpected issues due to skipping these intermediate manual steps?

Any guidance on this significant upgrade path would be highly appreciated.

Thank you in advance for your time and expertise.

-- 
Best Regards,

Miles Le (Mr.)




Re: Alerting on memory use and instance crash

2025-10-18 Thread sud
Thank you.
My understanding may be wrong here, and my apologies for using the example
of Oracle again even though the two are not the same. But having worked in
Oracle for a long time, I am trying to understand exactly how it's
different.

In Oracle RAC (Real Application Clusters), we have a single database with
multiple nodes/instances, which means the underlying storage is shared but
the memory/cpu of each of those instances is separate, and any of the
instances can be down while the database still operates, routing the
application traffic of the downed node to the others. Similarly, in AWS
Aurora Postgres there can also be multiple instances, such as Writer and
Reader instances/nodes, with the underlying storage being the same. So I
was wondering whether there are any cluster-level pg_* views we could query
to know if any one of the nodes is down? Also, I don't see any pg_* view
that shows the statistics of all the instances combined, i.e. cluster-level
statistics.

Do you mean that in normal Postgres there is always a single
instance/memory with a single storage attached? Then I also do not see any
such cluster-level views in AWS Aurora Postgres either? Pardon me if it's a
silly question.



On Wed, Oct 8, 2025 at 9:52 PM Adrian Klaver 
wrote:

> On 10/8/25 08:42, sud wrote:
> > Hi Experts,
> >
> > It's postgres version 16. I have two questions on alerting as below.
> >
> > 1) If we want alerting on any node/instance that crashes: in other
> > databases like Oracle, catalog views like "GV$Instance" give information
> > on whether the instances are currently active/down or not. But in
> > Postgres it seems all the pg_* views are instance-specific and do not
> > show information at the global/cluster level but are restricted to the
> > instance level only. So is there any other way to query the pg_* views
> > to get alerts on a specific instance crash?
>
> 1) When you say instance do you mean database?
>
> 2) Not all system tables/views are database only.
>
> For instance:
> https://www.postgresql.org/docs/current/catalog-pg-database.html
> https://www.postgresql.org/docs/current/catalog-pg-auth-members.html
> https://www.postgresql.org/docs/current/catalog-pg-authid.html
> https://www.postgresql.org/docs/current/view-pg-roles.html
>
>
> > 2) Is there a way to fetch data from a pg_* view to highlight the
> > specific connection/session/SQL that is using high memory in Postgres?
> >
> > Appreciate your guidance.
> >
> > Regards
> > Sud
>
>
> --
> Adrian Klaver
> [email protected]
>


Re: No POSTGIS in PostgreSQL 18

2025-10-18 Thread Adrian Klaver

On 9/28/25 23:22, Mark Idiong wrote:

Dear Webmaster,

I am learning SQL using PostgreSQL 18, just released. And I realized 
there is no POSTGIS support.


Yes there is
https://postgis.net/

"This version requires PostgreSQL 12 - 18beta3, ..."



Request guidance on how to activate this function as PostGIS is missing 
from the Stack Builder list of applications.


This is where I mention you should include in this sort of report the 
following:


1) OS and version.

2) Source of package.

Given you mention Stack Builder, I'm betting you used the EDB installer. 
What EDB chooses to include is up to them, you will need to touch base 
with them.


A good place to start is:

https://github.com/EnterpriseDB/edb-installers/issues



Many thanks,
Mark.



--
Adrian Klaver
[email protected]




Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-18 Thread Adrian Klaver

On 10/1/25 02:33, Ashish Mukherjee wrote:
I think the conclusion is to do more thorough testing before the 
upgrade next time. I have updated our playbook for upgrades to include 
more thorough testing.




*I upgraded multiple non-TDE databases from v12 to v17 and they are all 
fine.*


Then raise an issue here:

https://forums.percona.com/c/postgresql/pg-tde-transparent-data-encryption-tde/82

Seems to me finding the cause and a possible fix would build on the 
effort you have already put into using TDE, rather than learning an 
entirely new system and having it possibly fail somewhere else.




--
Adrian Klaver
[email protected]




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-18 Thread Tom Lane
Rob Sargent  writes:
> On 10/12/25 16:10, David Barsky wrote:
>>> Postgres is not an embedded database, if you want that experience then
>>> use a database that is designed to be embedded.

>> That's fair, especially from an operational standpoint. However, I _think_
>> Postgres can get really close to an embedded database's development 
>> experience
>> by doing a few tricks that I'll elaborate on later on in this email.

> Were I a betting man, I would bet heavily against this community, which 
> prides itself on NOT losing data, allowing an option that would do just 
> that.

Well, mumble ... we have any number of options that can be read that
way.  One obvious one is that we don't try to prevent you from putting
$PGDATA on a RAM disk.  Turning off fsync is another popular way to
trade away durability for speed.

But I concur with the point that we're not here to pretend to be an
embedded database, as there are other projects that do that better
(for example, our good friends at SQLite).

The advice I'd give the OP is to take a look at our TAP-test
infrastructure.  We've put a good deal of effort, and are continuing
to do so, into letting those tests spin up transitory testing
databases pretty cheaply.

regards, tom lane




Alerting on memory use and instance crash

2025-10-18 Thread sud
Hi Experts,

It's postgres version 16. I have two questions on alerting as below.

1) If we want alerting on any node/instance that crashes: in other
databases like Oracle, catalog views like "GV$Instance" give information on
whether the instances are currently active/down or not. But in Postgres it
seems all the pg_* views are instance-specific and do not show information
at the global/cluster level but are restricted to the instance level only.
So is there any other way to query the pg_* views to get alerts on a
specific instance crash?
2) Is there a way to fetch data from a pg_* view to highlight the specific
connection/session/SQL that is using high memory in Postgres?

Appreciate your guidance.

Regards
Sud


Re: Postgres and AIO

2025-10-18 Thread Dominique Devienne
On Mon, Sep 29, 2025 at 2:07 PM Weck, Luis  wrote:
> Now that AIO landed in v18,
> [...] index updating happens one at a time.
> [...] it took a long time sequentially reading the index for each value

These have nothing to do with Async-IO IMHO.

For your first case, each index is its own file, so could already be
processed in parallel, in terms of IO.

For your second case, and in general, this is about the SQL processing
engine doing more in parallel. AFAIK, because of the current (mostly?)
single-threaded-process-based architecture, parallel processing means
forking worker processes, and that's not done too often.

AIO is not a magic bullet. It just means NOT blocking waiting for IO,
so you have the opportunity to do something else during that time. But
your code must be structured to be able to do something else already.
Which is not free, and tends to make the code more complex. In
single-threaded apps, like PostgreSQL backends, that means an event
loop, and while I'm sure there's one, for the client-backend socket
IO, I'm not sure the rest of the code is loop-based.

But these are comments from someone who doesn't know the code, so feel
free to ignore :). --DD




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-18 Thread Dominique Devienne
On Mon, Oct 13, 2025 at 5:08 PM Ron Johnson  wrote:
> On Mon, Oct 13, 2025 at 4:47 AM Dominique Devienne  
> wrote:
> [snip]
>> There's apparently no way to abstract the "transport" between libpq
>> and the server, must be TCP (or *nix only socket files), cannot be an
>> in-memory channel (for the embedded non-shared case),
> I'd bet a nickel that local socket transfers all stay in memory.

But that's not cross-platform... Needs to work the same on Windows.

>> nor shared-memory (for the shared-case across localhost processes).
> Shared memory means that I can stomp all over you, and you can't stop me.  
> That's the antithesis of ACID.

SHM is how SQLite in WAL mode coordinates access to the same DB from
several connections. So if it's good enough for SQLite, I don't see why it
would be wrong for PostgreSQL. SQLite is also ACID.




Clarification on restart_lsn behavior in logical decoding

2025-10-18 Thread Rajendra Kumar Dangwal
Hi PG users.

We are using PostgreSQL’s logical decoding feature with Debezium to capture
CDC events and stream them to Kafka. To verify whether the required
position to resume from after a connector restart is still available on the
server, we compare the processed LSNs with the restart_lsn of the
replication slot.
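
For reference, the slot positions come from pg_replication_slots, e.g.:

SELECT slot_name, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_type = 'logical';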

From the documentation, our understanding is that restart_lsn represents
the oldest WAL position still required by the logical decoding consumer:

> restart_lsn pg_lsn
> The address (LSN) of oldest WAL which still might be required by the
> consumer of this slot and thus won't be automatically removed during
> checkpoints unless this LSN gets behind more than max_slot_wal_keep_size
> from the current LSN. NULL if the LSN of this slot has never been reserved.


However, there is some confusion around the possible value of restart_lsn.
My current understanding is that the restart_lsn cannot be greater than the
start LSN of the oldest open (non-consumed) transaction.
One of my teammates, however, suggested that there might be cases where the
restart_lsn could advance beyond the start LSN of an unconsumed
transaction, as illustrated below:

               LSN
T1 BEGIN
T1 CHANGE 1    20
T1 CHANGE 2    21  <--- Restart LSN
T1 CHANGE 3    22
T1 COMMIT      30  <--- Confirmed flush LSN
T2 BEGIN
T2 CHANGE 1    10  <--- Start LSN of unconsumed transaction
T2 CHANGE 2    11  <--- Last processed LSN
T2 CHANGE 3    12
T2 COMMIT      40

Could someone please clarify whether the restart_lsn can indeed have a
value greater than the start LSN of an unconsumed transaction, or whether
it is always expected to be less than or equal to it?


Thanks,

Rajendra.


Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-18 Thread Laurenz Albe
On Tue, 2025-09-30 at 13:53 +0530, Ashish Mukherjee wrote:
> Now the consideration is to use some other encryption option for the
> database which will work fine on pgsql 17. Cybertec's technology is
> one route, the other is EDB. I am happy to hear experiences of folks
> here with pgsql encryption options for v17 on large databases
> (2.5T in our case).

I will refrain from making a recommendation, but you should know that
data-at-rest encryption will always incur a certain performance penalty.

Whatever solution you choose, you should run performance tests.

Yours,
Laurenz Albe




Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-18 Thread Ashish Mukherjee
Thank you all for your inputs.

Well, Percona TDE was leading to the queries being very inefficient / slow
after upgrading to pgsql 17. Explain analyze shows that query planning time
shoots up crazily. A decision was taken to go back to pgsql 12, which
worked out fine as there was no incompatibility. I restored from the binary
dump with the -j option, as our database is huge. I completely agree that
downgrade is not a good option but a pragmatic one under the circumstances.

Now the consideration is to use some other encryption option for the
database which will work fine on pgsql 17. Cybertec's technology is one
route, the other is EDB. I am happy to hear experiences of folks here with
pgsql encryption options for v17 on large databases (2.5T in our case).

On Mon, Sep 29, 2025 at 5:10 AM Merlin Moncure  wrote:

> On Fri, Sep 26, 2025 at 8:16 AM Ashish Mukherjee <
> [email protected]> wrote:
>
>> Hello,
>>
>> I have a strange requirement to downgrade from pgsql 17 to pgsql 12. This
>> is because we found in production certain incompatibilities between both
>> versions for our database. It should have been caught in testing but was
>> not.
>>
>
> Agree with others that snap downgrade is not necessarily a good choice
> here.  Either way, if I were in your shoes, I'd be loading a plain text
> dump, maybe with some light massaging to strip out some compatibility
> issues.
>
> Can you let us know what the hang up is?  Version upgrades these days are
> usually pretty painless except for some performance issues, unless you have
> some unusual situations, for example, exotic extensions.
>
> merlin
>
>


Pgbackrest changing RepoServer to new one

2025-10-18 Thread KK CHN
List,

I am in need of changing the repo server of an existing, working
pgbackrest setup (RHEL 9.3, pgbackrest 2.52.1, database version 16).

*From googling, I got the information that I need to:*

1. First shut down the Postgres server cluster/instance. (This is a
production server; downtime permission is required if need be.)

2. Stop the stanza and delete the stanza.

3. Then recreate the stanza with the same name from the new repo server.

I need to change the repo server because storage limitations on the
current one force a move to a larger repo server.

Can I bypass step 1 (shutting down the Postgres server) and do steps 2
and 3, or will this break the entire DB server and backup setup?

Request your valuable guidance and advice.

Thank you,
Krishane


Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-18 Thread Ron Johnson
On Thu, Oct 16, 2025 at 6:22 PM Tom Lane  wrote:

> Greg Sabino Mullane  writes:
> > On Mon, Oct 13, 2025 at 3:19 PM David Barsky  wrote:
> >> Anyways, I'll try to get at what motivated this whole discussion: would
> >> there be community opposition to adding a CLI flag that'd exit/shutdown
> all
> >> Postgres processes once all pending connections close?
>
> > I don't know about opposition, per se, but the onus is on you to provide
> a
> > strong use case not already covered by existing tools.
>
> In particular, it's not terribly clear why the existing "smart"
> shutdown mode isn't sufficient.
>

I think OP is saying "we don't run the tests from a shell prompt, so can't
run pg_ctl stop".

I think he might be able to, though, by having his IDE run a PowerShell
command that runs "pg_ctl stop -wt", and also creating an error trap that
runs that PS command.



Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-18 Thread David Barsky
> If testing is all scripted, then why not put "pg_ctl stop" at the end of
> the script?

Sorry for the delay.

It’s _mostly_ scripted, but two major reasons:

1. If that script is cancelled or interrupted for any reason, it’s possible
   that `pg_ctl stop` won't be called and I'd have a leaked process. I could
   mitigate this by calling `pg_ctl stop` at the *start* of the script, but
   that adds a bit of latency I'd prefer to avoid.
2. It's a pain to hook that script up to our IDEs in a semi-centralized
   manner (I extended rust-analyzer's test runner to support non-standard
   build systems and I never really got non-standard build systems working
   with debuggers). Even if we eat the pain, the aforementioned latency
   coming from `pg_ctl stop` is a bit annoying.
   1. For context, rust-analyzer has a nice "runnables" feature that makes
      it possible to run a test directly from within an IDE. Other language
      servers/IDEs have similar functionality, but they're heavily biased
      to use language-idiomatic tools and make it a pain to override build
      tools/runnables. Besides, in my experience working on IDEs for a
      large tech company (and collecting a _lot_ of logs/telemetry...),
      it's _extremely_ rare for people to configure their editors: they're
      overwhelmingly stock.

I'm also of the mind that there's some elegance to `pg_ctl start
--exit-mode=smart`: no matter how the test script is interrupted or
cancelled, I can run it again and guarantee that there will be *no* leaked
processes or non-idempotency, because the postmaster handles it. That said,
looking over the `postmaster.c` source, it seems like there isn't any
bookkeeping of children starting/exiting, so there isn't really any
reference counting of connections there. I'd be happy to add it, but I _do
not_ trust myself to write correct C!

Anyways, I think this relatively small tweak can make a pretty meaningful
impact on the end-developer experience of programming against Postgres,
especially in setups that would like to minimize cross-language
scripting/dependencies. However, I also understand that this isn't exactly
how Postgres might be commonly used on these mailing lists.

—David

On Oct 13, 2025 at 1:27:17 PM, Ron Johnson  wrote:

> On Mon, Oct 13, 2025 at 3:19 PM David Barsky  wrote:
> [snip]
>
>> Anyways, I'll try to get at what motivated this whole discussion: would
>> there be
>> community opposition to adding a CLI flag that'd exit/shutdown all
>> Postgres
>> processes once all pending connections close? E.g., something similar to
>> SQL
>> Server's `auto_close` in the vein of `postgres
>> -c "auto_close_after=100"` or `pg-ctl start --exit-mode=smart`?
>>
>
> If testing is all scripted, then why not put "pg_ctl stop" at the end of
> the script?
>
>


Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-18 Thread Ashish Mukherjee
I think the conclusion is to do more thorough testing before the upgrade
next time. I have updated our playbook for upgrades to include more
thorough testing.

On Tue, Sep 30, 2025 at 8:17 PM Adrian Klaver 
wrote:

> On 9/30/25 01:23, Ashish Mukherjee wrote:
> > Thank you all for your inputs.
> >
> > Well, Percona TDE was leading to the queries being very inefficient /
> > slow after upgrading to pgsql 17. Explain analyze shows that query
> > planning time shoots up crazily. A decision was taken to go back to
>
> How did you determine that Percona TDE was the issue vs a 5 version jump
> in Postgres?
>

*I upgraded multiple non TDE databases from v12 to v17 and they are all
fine.*

>
>
> > Now the consideration is to use some other encryption option for the
> > database which will work fine on pgsql 17. Cybertec's technology is one
> > route, the other is EDB. I am happy to hear experiences of folks here
> > with pgsql encryption options for v17 on large databases (2.5T in our
> case).
>
> Personally I would verify first that you are not hitting some more
> general issue with the 5 years of changes in Postgres since the last
> release of 12 and current release of 17.
>
> >
> > On Mon, Sep 29, 2025 at 5:10 AM Merlin Moncure wrote:
>
>
>
> --
> Adrian Klaver
> [email protected]
>


Re: Does Java 8 support drivers 42.2.19 and 42.7.6?

2025-10-18 Thread Vladimir Sitnikov
>May I ask the developers whether driver versions 42.2.19 and 42.7.6
>officially support Java 8? It is important that support is full and
>problem-free.

Both pgjdbc 42.2.19 and pgjdbc 42.7.6 should work fine with Java 8.
The set of known issues differs between the versions.

42.2.19 was tested with Java 6, 7, 8, and 11:
https://github.com/pgjdbc/pgjdbc/blob/REL42.2.19/.travis.yml#L222
https://github.com/pgjdbc/pgjdbc/blob/REL42.2.19/.travis.yml#L210
https://github.com/pgjdbc/pgjdbc/blob/REL42.2.19/.github/workflows/main.yml#L41

42.7.6 was tested with Java 8, 11, 17, 21, and 24:
https://github.com/pgjdbc/pgjdbc/blob/REL42.7.6/.github/workflows/matrix.mjs#L45-L48

Vladimir


Index rebuilding strategy

2025-10-18 Thread Siraj G
Hello Experts!

What are the top pointers we should consider for an index rebuild? Checking
its size, a bloat estimate, heavy updates/deletes?

Please highlight the best practices.

Thanks
Siraj


Re: Database in another drive

2025-10-18 Thread Tim Gerber
Yes, you can either set the default tablespace to the desired drive or
create a tablespace on the E: drive and specify the db objects to use it.
Best to look at:
https://www.postgresql.org/docs/current/manage-ag-tablespaces.html
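
For example, a minimal sketch (the directory is illustrative; it must
already exist and be writable by the PostgreSQL service account):

CREATE TABLESPACE e_drive LOCATION 'E:/pgdata';

-- Either make it the session/database default for new objects:
SET default_tablespace = 'e_drive';
-- or create a whole database on it:
CREATE DATABASE mydb TABLESPACE e_drive;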

Best Regards,
Tim

On Mon, Oct 13, 2025 at 3:19 PM Arbol One  wrote:

> Is there a way to either create the database or save the data on another
> drive?
>
> I am using Win11 (not my choice, since 1998), and, not surprisingly, the
> WinOS had a crash; I lost all kinds of information including PostgreSQL
> (postgres (PostgreSQL) 18.0), its databases, tables, data, etc. What a
> nightmare! To keep this from happening again, I'd like to create the
> databases on the E: drive or have the tables stored on that drive. Please
> note that I am not talking about backing up the data to the E: drive, but
> about actually having the databases on the E: drive. Can this be done,
> and is it practical to do this?


Re: Are SQL-language function lock ordering deadlocks solved in PostgreSQL 18?

2025-10-18 Thread Merlin Moncure
On Mon, Oct 6, 2025 at 10:54 AM Bernice Southey 
wrote:

> Hi pgsql-general community,
>
> I tried the simple test from the original mail [1] and indeed got
> deadlocks in version 17, but not 18. Yet absence of evidence is not
> proof of no race conditions.
>
> This might be my favourite change in 18 [2], if I'm understanding it
> right. I much prefer the install-time checking of SQL-language
> functions, and the elegance of pure SQL. But I've been convinced the
> performance benefit of plan caching gave PL/pgSQL the edge. Now I'm
> not so sure.
>

It sure seems like the intent of the change was to modify exactly that
behavior.

If you look at the major built-in sql-esque languages (sql old, sql new,
PSM with the new standard syntax, and pl/pgsql), I guess pure SQL might be
overlapped by PSM-style functions, especially if PSM functions can be
inlined (I haven't looked yet); inlining of functions such as
'SELECT $1 + $2' is the only reason I write sql functions these days.
I have always found the strict compile-time checking to be highly
obnoxious except in the immutable/inlinable cases that come up now and
then.

Plan caching is not so important IMO, since execution time tends to greatly
exceed plan time in my experience, and the real performance bugaboo is bad
plans, not uncached ones.  Fixing bad plans typically involves dipping into
the pl/pgsql toolbox, using variable temp tables, planner directives, and
other tools forbidden in PSM.  In simple terms, I also used to restrict
server-side coding to views and pure sql functions for abstraction, but
have long since given up, and mainly write procedures :-).

I really wish PSM functions could do what pl/pgsql functions do, since I
like the creation syntax vs the 'string is body of code', but, alas.

merlin


Re: Enquiry about TDE with PgSQL

2025-10-18 Thread Laurenz Albe
On Fri, 2025-10-17 at 00:49 -0400, Ron Johnson wrote:
> On Thu, Oct 16, 2025 at 6:05 PM Greg Sabino Mullane  
> wrote:
> > 
> > TDE, on the other hand, is a very complex and difficult thing to add into 
> > Postgres.
> 
> TDE was added to SQL Server, with (to us, at least) minimally-noticed 
> overhead.
> Oracle has it, too, but I don't know the details.
> 
> The bottom line is that requirements for TDE are escalating, whether you like 
> it or
> not, as Yet Another Layer Of Defense against hackers exfiltrating data, and 
> then
> threatening to leak it to the public.

Bruce Momjian has interesting things to say about that in
https://compiledconversations.com/6/ (unfortunately I don't remember where
exactly in this 84 minute piece).

It is a feature that users want (or need to comply with whatever they feel
they have to comply with).  On the other hand, it has very limited technical
or security value, which hampers its acceptance into core.

Yours,
Laurenz Albe




Re: pgpass file in postgresql.auto.conf?

2025-10-18 Thread Laurenz Albe
On Fri, 2025-09-26 at 12:05 +, Dan Mahoney (Gushi) wrote:
> In the interest of automation, I've set up a pgpass file for my 
> pg_basebackup between master and standby.  This all works, thusly:
> 
> pg_basebackup -d 
> 'postgres://[email protected]:5432/foo?sslmode=verify-ca' -F p 
> --wal-method=stream -P -R -D /var/db/postgres/data17-test3
> 
> However, instead of the password getting baked into postgresql.auto.conf, 
> the reference to the passfile gets put in instead:
> 
> # Do not edit this file manually!
> # It will be overwritten by the ALTER SYSTEM command.
> primary_conninfo = 'user=repuser passfile=''/var/db/postgres/.pgpass'' 
> channel_binding=prefer host=10.1.1.1 port=5432 sslmode=''verify-ca'' 
> sslnegotiation=postgres sslcompression=0 sslcertmode=allow sslsni=1 
> ssl_min_protocol_version=TLSv1.2 gssencmode=disable krbsrvname=postgres 
> gssdelegation=0 target_session_attrs=any load_balance_hosts=disable 
> dbname=foo'

That happens when "pg_basebackup" used a password file to connect to
the PostgreSQL server.

> But it seems postgres won't actually read the passfile.

Oh yes, it will, as long as it has permissions 0600, 0400 or 0700 and
belongs to the database server OS user (commonly "postgres").
It must have worked for the "pg_basebackup", so PostgreSQL assumes it
will also work for replication.

> Sep 26 12:01:27 hostname postgres[42455]: [7-1] 2025-09-26 12:01:27.658 
> UTC [42455] FATAL:  could not connect to the primary server: connection to 
> server at "10.1.1.1", port 5432 failed: fe_sendauth: no password supplied
> 
> Am I doing something wrong here?

That is hard to say.  You should have run "pg_basebackup" as the
same OS user that starts the standby.

> I'm loathe to hand-edit the file, because of that warning there.

Makes sense, although it is OK as long as you don't mess up the file.


> Is there an alter system command that can be used to properly populate the 
> password into this file?

Sure.  If the standby server is up and running (even if it cannot connect
to the primary), you can connect and execute

  ALTER SYSTEM SET primary_conninfo = 'password=''my secret password''';

Yours,
Laurenz Albe




Re: Upgrade & Rollback plan: My customer requests rollback via old-version standby (13 ↔ 17) — need community advice

2025-10-18 Thread Laurenz Albe
On Fri, 2025-10-10 at 15:26 +0700, Vu Le (JData - HN) wrote:
> I'm currently planning a major version upgrade from PostgreSQL 13.x to
> 17.x in a production environment.
> 
> My customer has requested the following rollback approach, and I’d
> like to confirm if it’s technically feasible or advisable before
> proceeding.
> 
> Scenario:
> 1. They have a **Primary–Standby setup** (streaming replication).
> 2. Their idea is to **upgrade only the Primary** (to v17) first, while
> keeping the **Standby** on v13 (the old version).
>- The upgraded Primary will run read/write traffic for about a week
> to validate stability.
>- If any serious issue occurs, the plan is to **switch over**
> (promote the v13 Standby), adjust IPs, and resume operations there —
> minimizing downtime.
> 3. They also asked whether it’s possible for **data generated on the
> v17 Primary** to still be **replicated back to the v13 Standby**, so
> that rollback would be fast and without data loss.
> 
> Constraints:
> - They **cannot use a Blue/Green or clone-based approach**, because of
> **limited storage resources**.
> - They also doesn’t want the old data directory to become outdated
> (they expects it could stay in sync with the upgraded node).
> - They only have **UAT and Production environments** (no dedicated Staging).
> 
> Questions:
> 1. Is there **any supported or practical method** to replicate data
> *backward* (from PostgreSQL 17 to 13) — even temporarily, for rollback
> purposes?
> 2. If not, what are the **recommended real-world rollback strategies**
> for a low-downtime upgrade under these constraints?
> 3. Are there open-source tools or logical replication setups (e.g.,
> pglogical, Bucardo, etc.) that could safely achieve something similar?

The only way to achieve something like that is to use logical replication.
You'd have to switch from streaming replication to logical replication:

- create a publication for all tables on the primary
- turn off the application
- promote the standby server
- create a subscription on the former standby with "copy_data = off"
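
A minimal sketch of those steps in SQL (the publication/subscription names
and the connection string are illustrative):

-- On the primary (still v13 at this point), before the switchover:
CREATE PUBLICATION pub_all FOR ALL TABLES;

-- On the former standby, after promoting it:
CREATE SUBSCRIPTION sub_back
    CONNECTION 'host=primary.example.com port=5432 dbname=app user=repuser'
    PUBLICATION pub_all
    WITH (copy_data = off);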

Then you can upgrade the former primary with pg_upgrade --link and
restart the application.

After that, logical replication will keep the v13 machine updated.

Note that you cannot run any DDL statements on the database after that,
else replication will break.

You cannot upgrade the standby server, you'll have to discard the data
directory and start with a new pg_basebackup.

This is all pretty complicated and should be tested well.
But then, it might be a better idea to invest the testing effort into
testing the application on PostgreSQL v17, so that you are confident
that you won't need to downgrade.  That would allow you to use a simpler
and less risky form of upgrade.

Yours,
Laurenz Albe




Re: Upgrade & Rollback plan: My customer requests rollback via old-version standby (13 ↔ 17) — need community advice

2025-10-18 Thread Vu Le (JData - HN)
Dear Bruce and Laurenz,

Thank you very much for taking the time to share your valuable advice with me.

The information you provided was highly useful, and I appreciate the
specific guidance.

I intend to dedicate time to research your suggestions further.

I look forward to applying this knowledge. Thank you again for your
time and expertise.

Best regards,
Miles Le | Vu (Mr.)

On Sun, Oct 12, 2025 at 1:50 AM Bruce Momjian  wrote:
>
> On Sat, Oct 11, 2025 at 08:16:42PM +0200, Laurenz Albe wrote:
> > On Sat, 2025-10-11 at 23:02 +0700, Vu Le (JData - HN) wrote:
> > > If possible, could you please share any additional best practices or
> > > important considerations apart from testing the new version in a
> > > staging environment?
> >
> > All I can think of is:
> >
> > - Always keep a backup around.
> >
> > - Read the "Migration to version x" section of the release notes of v14,
> >   v15, v16 and v17 before you test the application.  That will give you
> >   ideas what areas to test particularly well.
> >
> > - Test your administrative procedures too.  For example, v15 removed
> >   the exclusive online file system backup.
>
> Agreed, and this might help for reviewing the release notes:
>
> https://momjian.us/main/blogs/pgblog/2022.html#June_13_2022
>
> --
>   Bruce Momjian  https://momjian.us
>   EDB  https://enterprisedb.com
>
>   Do not let urgent matters crowd out time for investment in the future.




Database in another drive

2025-10-18 Thread Arbol One
Is there a way to either create the database or save the data on another
drive?

I am using Win11 (not my choice, since 1998), and, not surprisingly, the
WinOS had a crash; I lost all kinds of information including PostgreSQL
(postgres (PostgreSQL) 18.0), its databases, tables, data, etc. What a
nightmare!
To keep this from happening again, I'd like to create the databases on the
E: drive or have the tables stored on that drive. Please note that I am not
talking about backing up the data to the E: drive, but about actually
having the databases on the E: drive. Can this be done, and is it practical
to do this?








Re: Alerting on memory use and instance crash

2025-10-18 Thread veem v
My 2cents:-
In regards to the memory consumption question of the OP: wouldn't the
columns "temp_blks_read" and "temp_blks_written" in pg_stat_statements
provide details around memory consumption, i.e. when a query exceeds
work_mem it starts occupying temp blocks? Something like the query below.
Correct me if I'm wrong.

WITH block_size AS (
  SELECT setting::int AS size FROM pg_settings WHERE name = 'block_size'
)
SELECT
  query,
  calls,
  pg_size_pretty(temp_blks_read * bs.size) AS temp_read_in_bytes,
  pg_size_pretty(temp_blks_written * bs.size) AS temp_written_in_bytes
FROM pg_stat_statements, block_size bs
WHERE temp_blks_read > 0 OR temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 10;

On Thu, 9 Oct 2025 at 01:24, Adrian Klaver 
wrote:

> On 10/8/25 11:58, sud wrote:
> > Thank you.
> > My understanding may be wrong here, and my apologies for using the
> > example of Oracle again even though the two are not the same. But having
> > worked in Oracle for a long time, I am trying to understand exactly how
> > it's different.
> >
> > In Oracle RAC (Real Application Clusters), we have a single database
> > with multiple nodes/instances, which means the underlying storage is
> > shared but the memory/cpu of each of those instances is separate, and
> > any of the instances can be down while the database still operates,
> > routing the application traffic of the downed node to the others.
> > Similarly, in AWS Aurora Postgres there can also be multiple instances,
> > such as Writer and Reader instances/nodes, with the underlying storage
> > being the same. So I was wondering whether there are any cluster-level
> > pg_* views we could query to know if any one of the nodes is down?
> > Also, I don't see any pg_* view that shows the statistics of all the
> > instances combined, i.e. cluster-level statistics.
> >
> > Do you mean that in normal Postgres there is always a single
> > instance/memory with a single storage attached? Then I also do not see
> > any such cluster-level views in AWS Aurora Postgres either? Pardon me
> > if it's a silly question.
> >
>
> It would be helpful if you specified exactly what variety of Postgres
> you are using and its version.
>
> If you are using AWS Aurora Postgres then you will need to look at pages
> like this:
>
>
> https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Managing.html
>
> This list is for the community version of Postgres, and it's been a long
> time since AWS saw fit to have someone on the list; when they were here
> they did not really provide answers.
>
> --
> Adrian Klaver
> [email protected]
>
>
>


Re: pg_hint_tables

2025-10-18 Thread Rob Sargent
I am top posting because I believe you have “hijacked” a thread.

Rather than replying to an existing thread unrelated to your topic, you
should start your own specific thread.


> On Sep 30, 2025, at 2:13 PM, Adrian Klaver  wrote:
> 
> On 9/30/25 12:51, Wong, Kam Fook (TR Technology) wrote:
>> I am trying to use pg_hint_tables in AWS Aurora Postgres.  I changed the
>> Postgres parameter "pg_hint_plan.enable_hint_table = 1".  Then I logged
>> out but can't log in anymore with the following error message.
> 
> To be clear it is pg_hint_plan not pg_hint_tables?
> 
>> “postgres@postgres:[2719]:ERROR:  relation "hint_plan.hints" does not exist 
>> at character 21”.
>> I undo the changes and log back in.  I can't find good resources that
>> lead me to the root cause.  So I tried a few other things, see below.
>> But no luck.  Any guidance or help is greatly appreciated.
> 
> From here:
> 
> https://github.com/ossc-db/pg_hint_plan/blob/master/docs/installation.md
> 
> Did you do?:
> 
> "
> Loading pg_hint_plan
> 
> pg_hint_plan does not require CREATE EXTENSION. Loading it with a LOAD 
> command will activate it and of course you can load it globally by setting 
> shared_preload_libraries in postgresql.conf. Or you might be interested in 
> ALTER USER SET/ALTER DATABASE SET for automatic loading in specific sessions.
> 
> postgres=# LOAD 'pg_hint_plan';
> LOAD
> "
>> ALTER EXTENSION pg_hint_plan  UPDATE TO '1.4';
>> GRANT SELECT, INSERT, UPDATE, DELETE ON hint_plan.hints TO PUBLIC;
>> GRANT USAGE ON SCHEMA hint_plan TO PUBLIC;
>> Thank you
>> Kam Fook Wong
> --
> Adrian Klaver
> [email protected]
> 
> 




pg_running_stats - mergeable running statistics (Welford/Chan) extension for postgresql

2025-10-18 Thread Chanukya SDS
Hi all,

I’d like to share a new PostgreSQL extension called pg_running_stats.

It implements mergeable, numerically stable running statistics using the
Welford and Chan algorithms.
Unlike the built-in aggregates such as avg(), variance(), and stddev(),
which require scanning the entire dataset, pg_running_stats maintains a
compact internal state that can be updated or merged incrementally.

This makes it well-suited for:

   1. streaming or real-time analytics where data arrives continuously,
   2. incremental computation over large tables,
   3. parallel or distributed queries that need to merge partial aggregates
   efficiently.


The extension computes:
mean, variance, standard deviation, skewness, kurtosis, and min/max all in
a single pass.
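
For intuition, here is a minimal SQL-only sketch of the Welford update rule
the extension builds on; this is not pg_running_stats' API, just the
algorithm expressed as a hand-rolled aggregate (state is {n, mean, M2};
sample variance = M2 / (n - 1)):

CREATE FUNCTION welford_step(state numeric[], x numeric)
RETURNS numeric[] AS $$
    SELECT ARRAY[
        state[1] + 1,                               -- n
        state[2] + (x - state[2]) / (state[1] + 1), -- updated mean
        state[3] + (x - state[2])                   -- updated M2
                 * (x - (state[2] + (x - state[2]) / (state[1] + 1)))
    ]
$$ LANGUAGE sql IMMUTABLE;

CREATE FUNCTION welford_final(state numeric[])
RETURNS numeric AS $$
    SELECT CASE WHEN state[1] > 1 THEN state[3] / (state[1] - 1) END
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE welford_var(numeric) (
    SFUNC     = welford_step,
    STYPE     = numeric[],
    FINALFUNC = welford_final,
    INITCOND  = '{0,0,0}'
);

-- Sanity check: should match the built-in var_samp
SELECT welford_var(x::numeric), var_samp(x)
FROM generate_series(1, 100) AS t(x);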

It’s written entirely in C, depends only on PostgreSQL headers, and builds
cleanly on macOS (Homebrew) and Linux using PGXS.

Source and documentation:
https://github.com/chanukyasds/pg_running_stats

Any feedback, testing, or suggestions for improvement would be very welcome.

Thanks,
Chanukya


Re: How do I upsert depending on a second table?

2025-10-18 Thread David G. Johnston
On Tuesday, September 23, 2025, Samuel Marks  wrote:

$subject

You can only upsert/provoke a meaningful conflict on the singular table
being inserted into.

There are other features like functions and triggers that may get you
something usable.
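
For example, a rough trigger-based sketch (the check against current_user
is my assumption about the intended ownership rule, not something stated in
the thread):

CREATE FUNCTION check_repo_org_owner() RETURNS trigger AS $$
BEGIN
    -- Reject the write unless the acting user owns the target org
    IF NOT EXISTS (SELECT 1 FROM org
                   WHERE org.name = NEW.org
                     AND org.owner = current_user::text) THEN
        RAISE EXCEPTION 'org % is not owned by %', NEW.org, current_user;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER repo_org_owner_check
    BEFORE INSERT OR UPDATE ON repo
    FOR EACH ROW EXECUTE FUNCTION check_repo_org_owner();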

David J.


Re: pg_hint_tables

2025-10-18 Thread Adrian Klaver

On 9/30/25 12:51, Wong, Kam Fook (TR Technology) wrote:

I am trying to use pg_hint_tables in AWS Aurora Postgres.  I changed the
Postgres parameter "pg_hint_plan.enable_hint_table = 1".  Then I logged out
but can't log in anymore with the following error message.


To be clear it is pg_hint_plan not pg_hint_tables?



“postgres@postgres:[2719]:ERROR:  relation "hint_plan.hints" does not exist at 
character 21”.

I undo the changes and log back in.  I can't find good resources that lead
me to the root cause.  So I tried a few other things, see below.  But no
luck.  Any guidance or help is greatly appreciated.


From here:

https://github.com/ossc-db/pg_hint_plan/blob/master/docs/installation.md

Did you do?:

"
Loading pg_hint_plan

pg_hint_plan does not require CREATE EXTENSION. Loading it with a LOAD 
command will activate it and of course you can load it globally by 
setting shared_preload_libraries in postgresql.conf. Or you might be 
interested in ALTER USER SET/ALTER DATABASE SET for automatic loading in 
specific sessions.


postgres=# LOAD 'pg_hint_plan';
LOAD
"


ALTER EXTENSION pg_hint_plan  UPDATE TO '1.4';
GRANT SELECT, INSERT, UPDATE, DELETE ON hint_plan.hints TO PUBLIC;
GRANT USAGE ON SCHEMA hint_plan TO PUBLIC;

Thank you
Kam Fook Wong

--
Adrian Klaver
[email protected]




Re: Upgrade & Rollback plan: My customer requests rollback via old-version standby (13 ↔ 17) — need community advice

2025-10-18 Thread Vu Le (JData - HN)
Thank you very much, Laurenz.
After reading several sources, I can confirm that this approach is
indeed not feasible at all.
I’m planning to prepare a short proposal and report to the customer,
focusing on the major risks they would face rather than trying to
implement it.

If possible, could you please share any additional best practices or
important considerations apart from testing the new version in a
staging environment?

Thank you once again for your guidance.
Wishing you a pleasant weekend ahead!



On Fri, Oct 10, 2025 at 4:01 PM Laurenz Albe  wrote:
>
> On Fri, 2025-10-10 at 15:26 +0700, Vu Le (JData - HN) wrote:
> > I'm currently planning a major version upgrade from PostgreSQL 13.x to
> > 17.x in a production environment.
> >
> > My customer has requested the following rollback approach, and I’d
> > like to confirm if it’s technically feasible or advisable before
> > proceeding.
> >
> > Scenario:
> > 1. They have a **Primary–Standby setup** (streaming replication).
> > 2. Their idea is to **upgrade only the Primary** (to v17) first, while
> > keeping the **Standby** on v13 (the old version).
> >- The upgraded Primary will run read/write traffic for about a week
> > to validate stability.
> >- If any serious issue occurs, the plan is to **switch over**
> > (promote the v13 Standby), adjust IPs, and resume operations there —
> > minimizing downtime.
> > 3. They also asked whether it’s possible for **data generated on the
> > v17 Primary** to still be **replicated back to the v13 Standby**, so
> > that rollback would be fast and without data loss.
> >
> > Constraints:
> > - They **cannot use a Blue/Green or clone-based approach**, because of
> > **limited storage resources**.
> > - They also doesn’t want the old data directory to become outdated
> > (they expects it could stay in sync with the upgraded node).
> > - They only have **UAT and Production environments** (no dedicated Staging).
> >
> > Questions:
> > 1. Is there **any supported or practical method** to replicate data
> > *backward* (from PostgreSQL 17 to 13) — even temporarily, for rollback
> > purposes?
> > 2. If not, what are the **recommended real-world rollback strategies**
> > for a low-downtime upgrade under these constraints?
> > 3. Are there open-source tools or logical replication setups (e.g.,
> > pglogical, Bucardo, etc.) that could safely achieve something similar?
>
> The only way to achieve something like that is to use logical replication.
> You'd have to switch from streaming replication to logical replication:
>
> - create a publication for all tables on the primary
> - turn off the application
> - promote the standby server
> - create a subscription on the former standby with "copy_data = off"
>
> Then you can upgrade the former primary with pg_upgrade --link and
> restart the application.
>
> After that, logical replication will keep the v13 machine updated.
>
> Note that you cannot run any DDL statements on the database after that,
> else replication will break.
>
> You cannot upgrade the standby server, you'll have to discard the data
> directory and start with a new pg_basebackup.
>
> This is all pretty complicated and should be tested well.
> But then, it might be a better idea to invest the testing effort into
> testing the application on PostgreSQL v17, so that you are confident
> that you won't need to downgrade.  That would allow you to use a simpler
> and less risky form of upgrade.
>
> Yours,
> Laurenz Albe



--
Best Regards,

Miles Le | Vu (Mr.)




Re: Direct Major Upgrade (13.7 → 17.6) with pg_upgrade: Clarifying the need for Intermediate Minor Fixes/Scripts

2025-10-18 Thread Vu Le (JData - HN)
Hi Greg,
Much appreciated for the clear confirmation! It’s great to know that
`pg_upgrade` takes care of all structural changes, and that the
remaining "extra works" are indeed a rare manual check. I will follow
your advice and use common sense during the final review. Thanks for
the guidance and the reassurance.


On Wed, Oct 1, 2025 at 6:27 PM Greg Sabino Mullane  wrote:
>
> On Wed, Oct 1, 2025 at 3:47 AM Vu Le (JData - HN)  
> wrote:
>>
>> My question is: When performing a direct jump from 13.7 to 17.6 using 
>> pg_upgrade, do I need to manually identify and execute the cumulative extra 
>> works/fixes from ALL the skipped intermediate versions (13.8, 13.9, 14.x, 
>> 15.x, 16.x)?
>
>
> Yes. Although with common sense, e.g. no need to reindex something twice.
>
>> Is there an official best practice document that clarifies how
>> pg_upgrade handles these "extra works" from skipped minor releases?
>
>
> These are not handled by pg_upgrade, hence the "extra". As you noted above, 
> it's a manual step of reading the release notes and taking steps depending on 
> your particular database. Keep in mind, these steps are very rare.
>
>> Does pg_upgrade implicitly incorporate all necessary structural fixes from 
>> the minor versions between 13.7 and 17.6?
>
>
> Yes
>
>>
>> Has anyone in the community performed a similar large jump and found 
>> unexpected issues due to skipping these intermediate manual steps?
>
>
> I've jumped many versions, many times, for many clients. Just follow the 
> recommended steps and all will be well.
>
> Cheers,
> Greg
>
> --
> Crunchy Data - https://www.crunchydata.com
> Enterprise Postgres Software Products & Tech Support
>


-- 
Best Regards,

Miles Le (Mr.)




Re: Query on Patch and Upgrade History in PostgreSQL

2025-10-18 Thread Adrian Klaver

On 9/24/25 01:17, loganathan P wrote:


Dear All,

How do I find the date and time of applied minor patches and upgrades at 
the PostgreSQL database level?


In addition to looking at package information, as others have mentioned, 
there is:


https://git.postgresql.org/gitweb/?p=postgresql.git

Click on a tag name for when the tag was created.

If you click on the shortlog or log links next to
the tag you get a list of the commits.



Thanks.

Regards,
Loganathan P




--
Adrian Klaver
[email protected]




RE: Index (primary key) corrupt?

2025-10-18 Thread Wim Rouquart

Hi,

Apologies for the late response, had other fish to fry...

In response to your questions:

> What is the full (15.x) version of Postgres you are using?

15.14

>Is it the community version or a fork or SaaS?

Standard release indeed, running on RHAT8

> What do you get for queries below?:

>select * from pg_opclass where oid = 3124;

|oid  |opcmethod|opcname |opcnamespace|opcowner|opcfamily|opcintype|opcdefault|opckeytype|
|3124 |403      |int8_ops|11          |10      |1976     |20       |true      |0         |

>select * from pg_opclass where opcname = 'int8_ops';

|oid  |opcmethod|opcname |opcnamespace|opcowner|opcfamily|opcintype|opcdefault|opckeytype|
|3124 |403      |int8_ops|11          |10      |1976     |20       |true      |0         |
|10021|405      |int8_ops|11          |10      |1977     |20       |true      |0         |


>> How is the table populated with data?
>>Might be a good idea to find out.

Don't know, and apart from that, no data will be added to this table 
anymore; it seems this db will remain static (it's only a few rows, btw).

> Per a comment from Ron, does this lack of export happen every time you dump 
> the table?

Well, yes, until I do the rebuild and then the issue is fixed.





On 9/19/25 06:23, Wim Rouquart wrote:

>> What is the result for the query?:
>
>> select * from pg_index where indexrelid  =
>> 'idx_376814_primary'::regclass;
>
> Name   |Value  |
> ---+---+
> indexrelid |2006873|
> indrelid   |1998823|
> indnatts   |1  |
> indnkeyatts|1  |
> indisunique|true   |
> indnullsnotdistinct|false  |
> indisprimary   |true   |
> indisexclusion |false  |
> indimmediate   |true   |
> indisclustered |false  |
> indisvalid |true   |
> indcheckxmin   |false  |
> indisready |true   |
> indislive  |true   |
> indisreplident |false  |
> indkey |{} |
> indcollation   |{} |
> indclass   |{} |
> indoption  |{} |
> indexprs   |   |
> indpred|   |
>

Hmm, when I do the above on 15.14 I get:

-[ RECORD 1 ]---+---
indexrelid  | 242209
indrelid| 242205
indnatts| 1
indnkeyatts | 1
indisunique | t
indnullsnotdistinct | f
indisprimary| t
indisexclusion  | f
indimmediate| t
indisclustered  | f
indisvalid  | t
indcheckxmin| f
indisready  | t
indislive   | t
indisreplident  | f
indkey  | 1
indcollation| 0
indclass| 3124
indoption   | 0
indexprs| NULL
indpred | NULL

What is the full (15.x) version of Postgres you are using?

Is it the community version or a fork or SaaS?

What do you get for queries below?:

select * from pg_opclass where oid = 3124;

select * from pg_opclass where opcname = 'int8_ops';

>
>> How is the table populated with data?

Might be a good idea to find out.

Per a comment from Ron, does this lack of export happen every time you dump the 
table?

>
> Same answer as above.
>


--
Adrian Klaver
[email protected]



Re: Postgres and AIO

2025-10-18 Thread Xuneng Zhou
Hi,

On Mon, Sep 29, 2025 at 8:07 PM Weck, Luis  wrote:
>
> This is more of a question of capability and to make me understand how 
> exactly AIO work in Postgres.
>
> Now that AIO landed in v18, I was thinking of a use case which has annoyed me 
> sometimes, which is inserting  lots of data into a table with many indices. 
> What I am specifically “complaining” is that index updating happens one at a 
> time. Would it be possible/make sense to use AIO to do this?
>
> Another thing that happens often is that an index lookup for something like 
> SELECT y FROM tbl WHERE x IN (1,2,…N)  where N is a big number such as 1000 
> or 2000, takes a while, because (at least for versions < 18) it took a long 
> time sequentially reading the index  for each value. I ended up having to 
> split the values into smaller chunks and ran multiple queries in parallel to 
> maintain a lower latency overall.
>
> Anyway, does any of this make sense? Could Postgres extend the use of AIO to 
> such cases?

To my knowledge, AIO for index scans is still in progress and expected
to land in v19/v20 or later?

[1] https://wiki.postgresql.org/wiki/AIO (not kept up to date)
[2] 
https://talkingpostgres.com/episodes/what-went-wrong-what-went-right-with-aio-with-andres-freund

Best,
Xuneng




RE: Index (primary key) corrupt?

2025-10-18 Thread Wim Rouquart

Yes indeed, I just restore the database to before the rebuild.




> So the problem goes away once you’ve reindexed yet you claim it’s consistent? 
> What are you doing to get the problem to recur after you’ve done reindex to 
> make it work?
>
> David

I was assuming the OP has a dump of the affected condition and is restoring 
(and perhaps re-fixing).  No?

>



Re: Can't create a table with vector type as a non-super user

2025-10-18 Thread Adrian Klaver




On 9/30/25 6:35 AM, mrudula attili wrote:

Hello Team,





Concern:
As it's a production environment, we are not really happy to give away 
usage on the public schema.

Is there a way we could let the end users make use of the extension 
without granting usage on the public schema?


In addition to what Tom Lane suggested:

https://github.com/pgvector/pgvector/blob/master/vector.control

relocatable = true

And per here:

https://www.postgresql.org/docs/current/extend-extensions.html#EXTEND-EXTENSIONS-RELOCATION

"
A fully relocatable extension can be moved into another schema at any 
time, even after it's been loaded into a database. This is done with the 
ALTER EXTENSION SET SCHEMA command, which automatically renames all the 
member objects into the new schema. Normally, this is only possible if 
the extension contains no internal assumptions about what schema any of 
its objects are in. Also, the extension's objects must all be in one 
schema to begin with (ignoring objects that do not belong to any schema, 
such as procedural languages). Mark a fully relocatable extension by 
setting relocatable = true in its control file.

"

You could move the extension to a schema that the users do have usage on.
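
A sketch of that (schema and role names are illustrative):

CREATE SCHEMA extensions;
ALTER EXTENSION vector SET SCHEMA extensions;
GRANT USAGE ON SCHEMA extensions TO some_end_user;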




Thanks,
SA





--
Adrian Klaver
[email protected]





Re: PostgreSQL 18 not available for Noble?

2025-10-18 Thread Ray O'Donnell


On 28 September 2025 21:54:01 Adrian Klaver  wrote:


On 9/28/25 13:46, Ray O'Donnell wrote:

Hi all,

As per $subject - is PG 18 not available for Ubuntu Noble (24.04)?


According to the below it is:

https://www.postgresql.org/download/linux/ubuntu/


Hi Adrian,

That's what I thought, right enough.


Did you do?:

sudo apt update


Pretty sure I did. The laptop is put away now and my marriage will be in 
danger if I take it out again (joking!!), so I'll try again in the morning 
and report back.


Thanks for the response,

Ray.




Re: Are compression requirements needed when building only libraries?

2025-10-18 Thread Laurenz Albe
On Thu, 2025-10-09 at 16:58 +0200, Uilian Ries wrote:
> > My PostgreSQL is built with support for all these libraries, and I find 
> > that my libraries are linked with "libz"
> 
> Which libraries? libpgtypes, libpq, libecpg, libecpg_compat?

libpq, libecpg and libecpg_compat are linked with libz.so.1, libpgtypes isn't.

> > How did you configure and build PostgreSQL?
> 
> I'm using Meson through the Conan package manager.
> 
> You can find my full build log and ldd commands outputs attached.

That shows that you have configured PostgreSQL with support for zlib,
lz4 and zstd.  No idea why the libraries are not linked with libz.so.

Yours,
Laurenz Albe




Re: Alerting on memory use and instance crash

2025-10-18 Thread Rahila Syed
Hi,

The other question I had was , are there any pg_* views using which, we are
> able to see which session/connection is using the highest amount of memory?
> I don't see any such columns in pg_stats_activity
>

From a purely PostgreSQL database point of view, this feature is being
developed; you can view it here: PostgreSQL: Enhancing Memory Context
Statistics Reporting


Basically, this lets you provide the pid of any PostgreSQL process to an
sql function, which then returns its memory usage statistics.
Once this feature is committed, for obtaining memory usage statistics of
any postgresql session you would need to run
SELECT pg_backend_pid() which will give you the pid of the postgresql
backend.
You can then pass it to SELECT pg_get_process_memory_contexts(pid, ..),
which will return the memory consumption data.
This is for future reference.

At the moment, you can use the following view on the connection whose
memory you wish to inspect. This works only for the local connection, i.e.
you can't use it to query the statistics of any other PostgreSQL process or
connection:
PostgreSQL: Documentation: 18: 53.5. pg_backend_memory_contexts
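
For example, run inside the session you want to inspect:

SELECT name, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;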


Thank you,
Rahila Syed


Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-18 Thread Merlin Moncure
On Fri, Sep 26, 2025 at 8:16 AM Ashish Mukherjee 
wrote:

> Hello,
>
> I have a strange requirement to downgrade from pgsql 17 to pgsql 12. This
> is because we found in production certain incompatibilities between both
> versions for our database. It should have been caught in testing but was
> not.
>

Agree with others that snap downgrade is not necessarily a good choice
here.  Either way, if I were in your shoes, I'd be loading a plain text
dump, maybe with some light massaging to strip out some compatibility
issues.

Can you let us know what the hang up is?  Version upgrades these days are
usually pretty painless except for some performance issues, unless you have
some unusual situations, for example, exotic extensions.

merlin