Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Dominique Devienne
On Mon, Oct 13, 2025 at 9:19 PM David Barsky  wrote:
> > SQLite is fantastic, but its type system and lock model are too restrictive
> > for a general DB. Similar to David, I think PostgreSQL is close to my ideal
> > above, yet still far enough (and perhaps unwilling enough, as a community)
> > to venture into embedded and localhost use-cases, that it's frustrating.
>
> Yup, Dominique understands what I want: Postgres' type system, query planner,
> and locking model, but _shades_ of SQLite's operational properties during
> local development. However, I don't really need Postgres to function like an
> embedded database; I just want the self-contained process cleanup. Connecting
> to Postgres over TCP over localhost is perfect for my use-case

Except postgres is actively hostile to localhost
testing by not supporting ephemeral TCP ports...

Why oh why???
My own servers are unit tested that way.
Why would postgres.exe prevent the use of port 0?

Any work-around that picks a "free" port is racy and ugly.
There's a clean idiom for that, and that's port 0. Why disable it?

I don't get it... --DD
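
To illustrate: the usual workaround asks the OS for a free port, releases it,
and hopes nothing else grabs it before postgres binds it. A sketch of that
race in POSIX shell (paths illustrative; assumes python3 is available for the
socket call):

# TOCTOU window between printing the port and postgres binding it:
PORT=$(python3 -c "import socket; s=socket.socket(); s.bind(('127.0.0.1',0)); print(s.getsockname()[1])")
postgres -D ./test1 -p "$PORT"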

D:\pdgm\trunk\psc2\pgdata>where postgres.exe
D:\pdgm\kits\trunk\postgresql\17.6\Win_x64_10_v17\bin\postgres.exe

D:\pdgm\trunk\psc2\pgdata>where initdb.exe
D:\pdgm\kits\trunk\postgresql\17.6\Win_x64_10_v17\bin\initdb.exe

D:\pdgm\trunk\psc2\pgdata>initdb -D .\test1 -U postgres -A trust
The files belonging to this database system will be owned by user "ddevienne".
...
Success. You can now start the database server using:  pg_ctl -D
^"^.^\test1^" -l logfile start

D:\pdgm\trunk\psc2\pgdata>postgres -D .\test1 -p 0
2025-10-15 09:43:59.293 GMT [42288] FATAL:  0 is outside the valid
range for parameter "port" (1 .. 65535)




Re: PostgreSQL 18 not available for Noble?

2025-10-17 Thread Adrian Klaver

On 9/28/25 14:04, Ray O'Donnell wrote:


On 28 September 2025 21:54:01 Adrian Klaver  
wrote:



On 9/28/25 13:46, Ray O'Donnell wrote:

Hi all,

As per $subject - is PG 18 not available for Ubuntu Noble (24.04)?


According to the below it is:

https://www.postgresql.org/download/linux/ubuntu/


Hi Adrian,

That's what I thought, right enough.


I see it here:

https://apt.postgresql.org/pub/repos/apt/dists/noble-pgdg/

Check:

Are the PGDG repos set up in Tuxedo OS?

Or is Tuxedo using the default Ubuntu repo, which is pinned at 
Postgres 16:


https://packages.ubuntu.com/noble/database/postgresql




Did you do?:

sudo apt update


Pretty sure I did; the laptop is put away now and my marriage will be 
in danger if I take it out again (joking!!), so I'll try again in the 
morning and report back.


Thanks for the response,

Ray.





--
Adrian Klaver
[email protected]




Re: Alerting on memory use and instance crash

2025-10-17 Thread Ron Johnson
On Wed, Oct 8, 2025 at 2:58 PM sud  wrote:
[snip]

> Do you mean in normal Postgres it's always a single instance/memory and
> single storage attached? Then I also do not see any such cluster-level
> views in AWS Aurora Postgres?
>

Yup.


> Pardon if it's a silly one to ask.
>

A Google for "what's the difference between Oracle and Postgresql" _might_
help.  I've never done that, so don't know what you'll find.

As far as how Aurora works... you need to ask AWS.  It's been too heavily
modified for a list dedicated to pure/unmodified Postgresql to help.


> On Wed, Oct 8, 2025 at 9:52 PM Adrian Klaver 
> wrote:
>
>> On 10/8/25 08:42, sud wrote:
>> > Hi Experts,
>> >
>> > It's postgres version 16. I have two questions on alerting as below.
>> >
>> > 1)If we want to have alerting on any node/instance that gets crashed :-
>> > In other databases like Oracle the catalog Views like "GV$Instance"
>> used
>> > to give information on whether the instances are currently active/down
>> > or not. But in postgres it seems all the pg_* views are instance
>> > specific and are not showing information on the global/cluster level
>> but
>> > are restricted to instance level only. So is there any other way to
>> > query the pg_* views to have alerts on the specific instance crash?
>>
>> 1) When you say instance do you mean database?
>>
>> 2) Not all system tables/views are database only.
>>
>> For instance:
>> https://www.postgresql.org/docs/current/catalog-pg-database.html
>> https://www.postgresql.org/docs/current/catalog-pg-auth-members.html
>> https://www.postgresql.org/docs/current/catalog-pg-authid.html
>> https://www.postgresql.org/docs/current/view-pg-roles.html
>>
>>
>> > 2)Is there a way to fetch the data from pg_* view to highlight the
>> > specific connection/session/sqls which is using high memory in postgres?
>> >
>> > Appreciate your guidance.
>> >
>> > Regards
>> > Sud
>>
>>
>> --
>> Adrian Klaver
>> [email protected]
>>
>



Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread David Barsky
> "Testing" db interaction in a faked, circumscribed
> only-my-stuff-is-there world is folly. Certainly each db developer
> needs their own instance of the database (on their own box or a
> server). And it needs to be kept current with both DDL and domain meta
> data changes (see things like flyway) as regularly as is the source
> code. It should have a decent representation of a production dataset
> else reads and writes will always be fast. All the tests reading and
> writing all the columns of all the tables generates a lot of "green
> lights" but near zero practicable information in the developers' workflow.

This is not the extent of the testing we plan on using. In this email thread,
I'm only referring to fast unit tests that ensure the correctness of the
application's logic and behavior. These are paired with the proper, hygienic
things you called out.

> > Were I a betting man, I would bet heavily against this community, which
> > prides itself on NOT losing data, allowing an option that would do just
> > that.
> Well, mumble ... we have any number of options that can be read that way. One
> obvious one is that we don't try to prevent you from putting $PGDATA on a RAM
> disk. Turning off fsync is another popular way to trade away durability for
> speed.

We already do both for unit tests, which helps a bunch :).

> But I concur with the point that we're not here to pretend to be an embedded
> database, as there are other projects that do that better (for example, our
> good friends at SQLite).

> The advice I'd give the OP is to take a look at our TAP-test infrastructure.
> We've put a good deal of effort, and are continuing to do so, into letting
> those tests spin up transitory testing databases pretty cheaply.

Thanks! I'll dig in. I'm guessing you're referring to these?
https://www.postgresql.org/docs/current/regress-tap.html

For what it's worth, I don't think Postgres _should_ be an embedded database,
but I think there are some qualities of embedded databases that I would
love to see in Postgres: namely, the self-contained cleanup. Regardless,
I worry that my introducing SQLite into this discussion was a mistake and
hurt the coherency of my request, so I apologize for that.

> SQLite is fantastic, but its type system and lock model are too restrictive
> for a general DB. Similar to David, I think PostgreSQL is close to my ideal
> above, yet still far enough (and perhaps unwilling enough, as a community)
> to venture into embedded and localhost use-cases, that it's frustrating.

Yup, Dominique understands what I want: Postgres' type system, query planner,
and locking model, but _shades_ of SQLite's operational properties during
local development. However, I don't really need Postgres to function like an
embedded database; I just want the self-contained process cleanup. Connecting
to Postgres over TCP over localhost is perfect for my use-case: the difference
in performance for an in-process database (à la SQLite) vs. connecting over
localhost is the difference between single-digit microseconds and single-digit
milliseconds. That difference matters in some cases, but not here: as far as a
human running tests is concerned, both are instant. Here's someone at
CrunchyData/Snowflake providing an experience report of this exact workflow:

https://www.crunchydata.com/blog/dont-mock-the-database-data-fixtures-are-parallel-safe-and-plenty-fast

Anyways, I'll try to get at what motivated this whole discussion: would there
be community opposition to adding a CLI flag that'd exit/shutdown all Postgres
processes once all pending connections close? E.g., something similar to SQL
Server's `auto_close`, in the vein of `postgres -c "auto_close_after=100"` or
`pg_ctl start --exit-mode=smart`?

On Oct 13, 2025 at 9:43:15 AM, Dominique Devienne wrote:

> On Mon, Oct 13, 2025 at 5:37 PM Tom Lane  wrote:
> > Dominique Devienne  writes:
> > > On Mon, Oct 13, 2025 at 5:08 PM Ron Johnson  wrote:
> > >> Shared memory means that I can stomp all over you, and you can't stop
> > >> me.  That's the antithesis of ACID.
> >
> > > SHM is how SQLite in WAL mode coordinates access to the same DB from
> > > several connections. So if it's good enough for SQLite, I don't see
> > > what it would be wrong for PostgreSQL too.
> >
> > SQLite has accepted the cost that comes with being embedded, which is
> > that application-side memory-stomping bugs can destroy the database.
> > Postgres is not willing to make that tradeoff.  From a pure
> > developer's perspective, every time we got a bug report we'd have to
> > ask "did you observe this while running embedded?" and then demand a
> > repro that uses a non-embedded database.  We are not going to help
> > application authors debug their own bugs, especially not when we have
> > no visibility into what those are.
>
> That's true for the embedded case, true.
>
> There are some of those on the ML (which is in fact a custom SQLite-based
> forum), but not that many, far from it. So that concern 

Re: Are compression requirements needed when building only libraries?

2025-10-17 Thread Laurenz Albe
On Thu, 2025-10-09 at 13:10 +0200, Uilian Ries wrote:
> Greetings!
> 
> I'm building PostgreSQL 17.5 locally, on Ubuntu 22.04, and using a CPU 
> architecture x86_64.
> 
> I noted the meson_options.txt file contains a few optional dependencies, like
> zstd, zlib, lz4, libxml, and libxslt, which mostly seem to be related to 
> compression.
> 
> However, even when I build all libraries using those options enabled, I
> can't see them listed as a dependency of the libraries:
> 
> [...]
>
> So, my question is: Is it correct to affirm I don't need those dependencies
> when only interested in building and consuming the libraries?
> 
> I'm asking because I don't see a specific option in meson_options.txt to
> disable/enable producing the executables, but only general options for
> dependencies.

My PostgreSQL is built with support for all these libraries, and I find that
my libraries are linked with "libz", but not with the other compression
libraries.

How did you configure and build PostgreSQL?

Yours,
Laurenz Albe




Re: Direct Major Upgrade (13.7 → 17.6) with pg_upgrade: Clarifying the need for Intermediate Minor Fixes/Scripts

2025-10-17 Thread Greg Sabino Mullane
On Wed, Oct 1, 2025 at 3:47 AM Vu Le (JData - HN) 
wrote:

> My question is: When performing a direct jump from 13.7 to 17.6 using
> pg_upgrade, do I need to manually identify and execute the cumulative extra
> works/fixes from ALL the skipped intermediate versions (13.8, 13.9, 14.x,
> 15.x, 16.x)?
>

Yes. Although with common sense, e.g. no need to reindex something twice.

> Is there an official best practice document that clarifies how
> pg_upgrade handles these "extra works" from skipped minor releases?
>

These are not handled by pg_upgrade, hence the "extra". As you noted above,
it's a manual step of reading the release notes and taking steps depending
on your particular database. Keep in mind, these steps are very rare.

> Does pg_upgrade implicitly incorporate all necessary structural fixes from
> the minor versions between 13.7 and 17.6?
>

Yes


> Has anyone in the community performed a similar large jump and found
> unexpected issues due to skipping these intermediate manual steps?
>

I've jumped many versions, many times, for many clients. Just follow the
recommended steps and all will be well.

Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support


Re: Alerting on memory use and instance crash

2025-10-17 Thread Ron Johnson
On Wed, Oct 8, 2025 at 11:42 AM sud  wrote:

> Hi Experts,
>
> It's postgres version 16. I have two questions on alerting as below.
>
> 1)If we want to have alerting on any node/instance that gets crashed :- In
> other databases like Oracle the catalog Views like "GV$Instance" used to
> give information on whether the instances are currently active/down or not.
> But in postgres it seems all the pg_* views are instance specific and are
> not showing information on the global/cluster level but are restricted to
> instance level only. So is there any other way to query the pg_* views to
> have alerts on the specific instance crash?
>

In Postgresql, cluster == instance.  That's a historical fluke which might
never go away.  Thus, if the cluster is down, you can't access anything.

Connection poolers that use virtual IP addresses and are the modern
definition of "cluster" sit on top of individual PG clusters.  Even though
the pooler fails the (modern) cluster over to the replica instance, PG
still thinks one cluster is down, and the former-replica cluster is now the
primary cluster.

Confusing?  Yes.  Just accept that *PG cluster == instance*, and that
*Postgresql is not Oracle*.


> 2)Is there a way to fetch the data from pg_* view to highlight the
> specific connection/session/sqls which is using high memory in postgres?
>





JSONB subscripting initializes numeric keys as arrays instead of objects

2025-10-17 Thread Krrish Malhotra
I'm using PostgreSQL 16+ and working extensively with jsonb columns using
JSON subscripting paths (as described in the PostgreSQL docs on jsonb
subscripting).
I've run into an issue when updating nested paths where intermediate keys
might not exist. For example:

UPDATE test SET data['A']['B']['C'] = '{"a": "b"}' WHERE data->>'ID' = 'abcde';

If A.B doesn’t exist, PostgreSQL automatically initializes it as an empty
JSON object ({}), and then correctly sets the key C. However, if the last
key is numeric, for example:

UPDATE test SET data['A']['B']['3'] = '{"a": "b"}' WHERE data->>'ID' = 'abcde';

PostgreSQL initializes A.B as an empty array instead of an object, and sets
the value at index 3. This behavior isn't what I want: I'd like numeric
keys to be treated as JSON object keys (e.g., {"3": {...}}) rather than as
array indices. I know I can pre-initialize A.B like this:

data['A']['B'] = coalesce(data->'A'->'B', '{}')

But that causes problems when multiple JSON path updates happen in the same
query, since it can overwrite or reset other keys.

Additionally, in my use case, I don’t always know in advance whether a
given path exists at the time of the update, so I’d like a solution that
won’t break or conflict with existing data.

Is there any way to force PostgreSQL to treat numeric subscripts as object
keys instead of array indices, or otherwise control this initialization
behavior?
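
A minimal reproduction sketch of the behavior described above (table and key
names are illustrative):

CREATE TEMP TABLE test (data jsonb);
INSERT INTO test VALUES ('{"ID": "abcde"}');

-- Textual last subscript: missing intermediate keys become objects.
UPDATE test SET data['A']['B']['C'] = '{"a": "b"}' WHERE data->>'ID' = 'abcde';
-- data now contains {"A": {"B": {"C": {"a": "b"}}}}

-- Integer-like last subscript: the missing container becomes an array,
-- padded with nulls up to index 3.
UPDATE test SET data['X']['Y']['3'] = '{"a": "b"}' WHERE data->>'ID' = 'abcde';
-- data now contains {"X": {"Y": [null, null, null, {"a": "b"}]}}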


Re: Alerting on memory use and instance crash

2025-10-17 Thread sud
Thank you so much. That helps.

I am planning to use the pg_stat_get_backend_memory_contexts function,
something like the below, joining it to pg_stat_activity. Hope this is the
right usage. Somehow I am getting an error stating the function doesn't
exist, but it might be because of the version. I will try with a higher
version.

SELECT   pa.pid,
         pa.usename,
         pa.application_name,
         pa.state,
         mc.name AS memory_context,
         pg_size_pretty(mc.used_bytes) AS used_memory
FROM     pg_stat_activity pa
JOIN LATERAL pg_stat_get_backend_memory_contexts(pa.pid) mc ON TRUE
WHERE    pa.pid <> pg_backend_pid()
ORDER BY mc.used_bytes DESC;

However, is the below query, which was shared by Veem in the above email
thread, also going to give similar memory consumption information, i.e. avg
memory consumption per query from pg_stat_statements?

WITH block_size AS (
  SELECT setting::int AS size FROM pg_settings WHERE name = 'block_size'
)
SELECT
  query,
  calls,
  pg_size_pretty(temp_blks_read * bs.size) AS temp_read_in_bytes,
  pg_size_pretty(temp_blks_written * bs.size) AS temp_written_in_bytes
FROM pg_stat_statements, block_size bs
WHERE temp_blks_read > 0 OR temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 10;


On Fri, Oct 10, 2025 at 4:08 PM Rahila Syed  wrote:

> Hi,
>
> The other question I had was, are there any pg_* views using which we
>> are able to see which session/connection is using the highest amount of
>> memory? I don't see any such columns in pg_stat_activity
>>
>
> From a purely postgresql database point of view, this feature is being
> developed; you can view it here: PostgreSQL: Enhancing Memory Context
> Statistics Reporting
>
> Basically, this lets you provide the pid of any PostgreSQL process to an
> sql function, which then returns its memory usage statistics.
> Once this feature is committed, for obtaining memory usage statistics of
> any postgresql session you would need to run
> SELECT pg_backend_pid() which will give you the pid of the postgresql
> backend.
> You can then pass it to SELECT pg_get_process_memory_contexts(pid, ..),
> which will return the memory consumption data.
> This is for future reference.
>
> At the moment, you can use the following function on the connection whose
> memory you wish to inspect.
> This works only for local connection i.e you can't use this function to
> query the statistics of any other
> postgresql process or connection.
> PostgreSQL: Documentation: 18: 53.5. pg_backend_memory_contexts
> 
>
> Thank you,
> Rahila Syed
>
>
>
>
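
As a concrete illustration of that last point, a sketch of a session
inspecting its own memory via the view mentioned above (available since
PostgreSQL 14):

-- Top memory contexts of the *current* backend only:
SELECT name, pg_size_pretty(used_bytes) AS used
FROM pg_backend_memory_contexts
ORDER BY used_bytes DESC
LIMIT 10;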


Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Adrian Klaver

On 10/8/25 12:39, David Barsky wrote:

Hiya folks,

I'm a bit of a newcomer when it comes to PostgreSQL, so I apologize if this is
the wrong mailing list. Anyways, my two questions:

1. Is there any interest in adding a command line option to the `postgres` CLI
   that shuts down the PostgreSQL instance (and optionally cleans up the
   data directory) once all connections have disconnected?


https://www.postgresql.org/docs/current/app-pg-ctl.html

"
stop mode shuts down the server that is running in the specified data 
directory. Three different shutdown methods can be selected with the -m 
option. “Smart” mode disallows new connections, then waits for all 
existing clients to disconnect. If the server is in hot standby, 
recovery and streaming replication will be terminated once all clients 
have disconnected. “Fast” mode (the default) does not wait for clients 
to disconnect. All active transactions are rolled back and clients are 
forcibly disconnected, then the server is shut down. “Immediate” mode 
will abort all server processes immediately, without a clean shutdown. 
This choice will lead to a crash-recovery cycle during the next server 
start.

"



   a. Alternatively, I wouldn't mind accomplishing this via the single-user
      mode if it could accept the binary/wire protocol in addition to the
      current text protocol.
2. Are there plans for having any additional table access methods beyond
   `HEAP` shipping as part of Postgres? I'd love to have something that's
   purely in-memory to bypass the tempdir dance that I'm currently doing.

For context, I'm trying to make it easier to test our application against a
live, actual PostgreSQL instance and make the experience feel a lot like
sqlite's embedded/in-memory workflow. Today, we've gotten really great


Postgres is not an embedded database, if you want that experience then
use a database that is designed to be embedded.


latencies via test transactions, but I'd also like to ensure that there aren't
any orphaned Postgres processes at the end of a test run or without requiring
the user to start an instance of Postgres prior to running the tests.

Warmest regards,
David



--
Adrian Klaver
[email protected]




Re: Does Java 8 support drivers 42.2.19 and 42.7.6?

2025-10-17 Thread Adrian Klaver

On 10/6/25 01:01, Артем Романюк wrote:

Good afternoon!

Dear PostgreSQL community. Can I ask the developers if Java 8 officially 
supports drivers 42.2.19 and 42.7.6? It is important that support is 
provided fully and without problems.


1) Why not use the latest minor version?

2) "... provided fully and without problems" is not a guarantee that can 
be made. The best that can be said is it passed the project test suite. 
You will have to test on your setup to verify it works without issue and 
file a report here https://github.com/pgjdbc/pgjdbc/issues if it does not.




Thank you!



--
Adrian Klaver
[email protected]




Re: Can't create a table with vector type as a non-super user

2025-10-17 Thread Tom Lane
mrudula attili  writes:
> But a non super user (a read write user of the database) is not able to
> create a table with vector type and keeps getting the below error.

> CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3))
> [2025-09-30 09:22:29] [42704] ERROR: type "vector" does not exist
> [2025-09-30 09:22:29] Position: 57

Well, you have to grant usage on whatever schema the type is in.

> As its a production environment, we are not really happy to give away the
> usage on public schema.

This seems like a very strange requirement.  What are you keeping in
"public" that you don't want to be generally available in that
database, and why?  You do understand the difference between USAGE
and CREATE privileges for schemas, right?

> Is there a way we could get the end users make use of the extension without
> granting usage on public schema

You could put it in some other schema, but then users would have to
adjust their search_path or name the schema explicitly.  The design
expectation is that you use the public schema for stuff that should
be available to all SQL users, and put stuff that needs more
protection in some other schema(s).
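
To make that concrete, a sketch of the dedicated-schema route, assuming the
pgvector extension and illustrative schema/role names:

-- Install the extension into its own schema and grant USAGE, not CREATE:
CREATE SCHEMA vectors;
CREATE EXTENSION vector SCHEMA vectors;
GRANT USAGE ON SCHEMA vectors TO app_user;

-- Users then qualify the type, or add "vectors" to their search_path:
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vectors.vector(3));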

There is a reasonable debate about whether giving out CREATE privilege
on the public schema is a good idea (probably not, if you have
not-fully-trustworthy users).  But I'm having a hard time seeing why
you'd not want to give out USAGE.

regards, tom lane




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Rob Sargent



On 10/12/25 16:10, David Barsky wrote:

> stop mode shuts down the server that is running in the specified data
> directory. Three different shutdown methods can be selected with the -m
> option. “Smart” mode disallows new connections, then waits for all existing
> clients to disconnect. If the server is in hot standby, recovery and
> streaming replication will be terminated once all clients have disconnected.
> “Fast” mode (the default) does not wait for clients to disconnect. All
> active transactions are rolled back and clients are forcibly disconnected,
> then the server is shut down. “Immediate” mode will abort all server
> processes immediately, without a clean shutdown. This choice will lead to a
> crash-recovery cycle during the next server start.

Ah, I missed this, thanks! I'm still new to this and unsure when I should use
`postgres` vs. `pg_ctl`. I can probably hack something together with this!

> Postgres is not an embedded database, if you want that experience then
> use a database that is designed to be embedded.

That's fair, especially from an operational standpoint. However, I _think_
Postgres can get really close to an embedded database's development experience
by doing a few tricks that I'll elaborate on later on in this email.

> > I think OP is looking for AUTO_CLOSE, like SQL Server (and Rdb/VMS before
> > it). Its only real utility is OP's use-case: a Windows desktop running
> > local testing.
> > We in the shell scripting daemon world don't think like that.
> > From the original post:
>
> "Is there any interest in adding a command line option to the `postgres`
> CLI"
>
> Which I took to mean:
>
> https://www.postgresql.org/docs/current/app-postgres.html

I think Ron's interpretation is correct, but I also don't mind using `pg_ctl`!

And yes, the thing I'm looking for looks pretty similar to SQL Server's
`AUTO_CLOSE`.

More concretely, the desiderata are (some are more flexible than others):

1. Our test runner runs each test as a standalone process. While it can
   _setup_ a test environment atomically, it can't tear down a test
   environment atomically. I think this is a reasonable stance on the part of
   the test runner to encourage reliable test suites.
2. We started by using SQLite, which has the _really nice_ property of being
   able to function entirely in-memory. This means that when the test
   completes, cleanup of the entire database occurs due to the operating
   system deallocating the test process' memory; no orphaned processes to
   think about.
3. After someone installs all the tools that they need for their development
   environment (language toolchains, editor, database), they shouldn’t need
   to do any additional, ongoing maintenance. Having experienced a workflow
   where the entire build/test process is almost entirely self-contained, the
   productivity benefits are massive and I really don’t want to go back.
   1. There's an additional benefit here: we're able to unit test against the
      actual database we're running against in production with complete
      fidelity (some people might say that these are really integration
      tests, but if each test completes in 0.02 milliseconds and scales to
      use all cores on my machine, I consider them to be _morally_ unit tests)

By "against the actual database..in production" do you mean the server
type (e.g. postgres) or a verbatim data set?  I am assuming the former.
Also assuming this isn't the application code hitting the server directly.


I'm pretty sure I want the following behavior from Postgres (this is the part
I referred to above that would get Postgres pretty close to the development
experience of an embedded database!):

1. On test, create or connect to an existing Postgres instance. Since each
   test is its own standalone process, I think something shaped like
   optimistic locking to launch Postgres at a given port suffices. The
   operating system will complain if two processes are launched at the same
   port, and the OS holding the lock on the port should prevent any TOCTOU
   bugs.
2. Each test runs its own set of test transactions, which are automatically
   rolled back at the end of each test.
3. Postgres does some sort of connection-based reference counting after the
   first connection. Once all connections close and a short timeout window
   passes (e.g., 100ms, but it should probably be configurable?) Postgres
   shuts down and cleans up any on-disk data.


"Testing" db interaction in a faked, circumscribed 
only-my-stuff-is-there world is folly.  Certainly each db developer 
needs their own instance of the database (on their own box or a 
server).  And it needs to be kept current with both DDL and domain meta 
data changes (see things like flyway) as regularly as is the source 
code.  It should have a decent representation of a production dataset 
else reads and writes will always be fast.  All the tests reading and 

Re: Enquiry about TDE with PgSQL

2025-10-17 Thread rainer

Am 2025-10-17 15:12, schrieb Greg Sabino Mullane:

On Fri, Oct 17, 2025 at 12:49 AM Ron Johnson
 wrote:


But filesystem encryption still means that validly logged-in users
see the unencrypted data.  That's great for a laptop that might get
stolen, or for drives that are discarded without being wiped, but
are no protection against hackers who want to exfiltrate your data.


I stand by my recommendation. If someone is logged in and has access
to your data directory (e.g. is root or postgres user), then they also
have the TDE key or some easy way to bypass it.


TDE was added to SQL Server, with (to us, at least)
minimally-noticed overhead.  Oracle has it, too, but I don't know
the details.
The bottom line is that requirements for TDE are escalating, whether
you like it or not


I'm not arguing against putting TDE in Postgres - indeed, I am all for
that. But it's a very tricky thing to do technically, with minimal
benefits other than "checking the box" of some security requirements
document.


The bottom line is that requirements for TDE are escalating, whether
you like it or not, as Yet Another Layer Of Defense against hackers
exfiltrating data, and then threatening to leak it to the public.


I'd love to see a real-world example where TDE would have saved
someone but disk encryption could not.



At least with Oracle and TDE, the rman backups are encrypted, too.
So, you cannot just download the dumps from a file-share and import them 
somewhere else (or if you get hold of the tapes, use those to restore 
the files).
It's not really about protecting the files from a hacker but from a 
person inside the organization who wants to sell the data.


TDE really only (IMHO) makes sense when you also have a HSM.


Now, I've seen a case where a customer had Oracle + Dataguard + TDE + 
HSM.


The thing is, you rarely interact with the HSM normally. So, it's
getting a bit hazy in that department when you have to add a new
secret... plus, while Oracle and the HSM vendor will happily license you
the feature, they will happily point their fingers at each other in case
a problem arises.


In this case, the HSM was wiped clear of all the keys in there, in an 
attempt to add a new key.

Oracle closed the wallets and the database was offline.

Dang!

Now, you had six figures worth of hardware doing exactly nothing.

Due to incredibly lucky circumstances, it was possible to recover the 
database without data loss.


So, with TDE, you really need to be sure you actually know what you're 
doing.






Re: executing Linux commands from the PostgreSQL server

2025-10-17 Thread Juan Rodrigo Alejandro Burgos Mella
Hello Matthias, the only way is to remove SUPERUSER privileges from the
user in question.

ALTER ROLE username WITH NOSUPERUSER;

If you do not have sufficient privileges, the database will display the
following error:

ERROR: must be superuser to COPY to or from an external program
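
One caveat worth adding: superuser is not the only path to COPY ... FROM
PROGRAM; the predefined role pg_execute_server_program grants it too, so a
full lockdown may also need something like this (target user illustrative):

-- Revoke membership in the role that allows server-side program execution:
REVOKE pg_execute_server_program FROM username;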

Atte.
JRBM

El mar, 23 sept 2025 a las 6:55, Matthias Apitz ()
escribió:

>
> Hello,
>
> The other way I detected that the PostgreSQL user 'postgres' (or any
> other user who can use the COPY ... FROM PROGRAM command) can do with SQL
>
> CREATE TABLE cmd_exec(cmd_output varchar(10));
> COPY cmd_exec FROM PROGRAM 'df -kh ; exit 0';
> select * from cmd_exec;
>
> Is there a way to avoid this?
>
> matthias
>
> --
> Matthias Apitz, ✉ [email protected], http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> Annalena Baerbock: "We are fighting a war against Russia ..." (25.1.2023)
>
> I, Matthias, I am not at war with Russia.
> Я не воюю с Россией.
> Ich bin nicht im Krieg mit Russland.
>
>
>


Must to have and good to have extensions

2025-10-17 Thread veem v
Hello Experts,
I am comparatively new to the postgres database and we are starting to use
on-premise postgres and also AWS aurora postgres for our applications. I know
there are many extensions, which are kind of additional features that by
default do not come with the installation. There are many such extensions
available in postgres. But I want to understand from the experts here: is
there a list of extensions which one must have, and which are good to have,
for on-premise postgres and AWS postgres?

Regards
Veem


Postgre and AIO

2025-10-17 Thread Weck, Luis
This is more of a question of capability, and to help me understand how
exactly AIO works in Postgres.

Now that AIO landed in v18, I was thinking of a use case which has annoyed me
sometimes, which is inserting lots of data into a table with many indices.
What I am specifically “complaining” about is that index updating happens one
at a time. Would it be possible/make sense to use AIO to do this?

Another thing that happens often is that an index lookup for something like
SELECT y FROM tbl WHERE x IN (1,2,…N), where N is a big number such as 1000
or 2000, takes a while, because (at least for versions < 18) it took a long
time sequentially reading the index for each value. I ended up having to
split the values into smaller chunks and ran multiple queries in parallel to
maintain a lower latency overall.

Anyway, does any of this make sense? Could Postgres extend the use of AIO to
such cases?


Re: Must to have and good to have extensions

2025-10-17 Thread Adrian Klaver

On 10/17/25 08:20, veem v wrote:

Hello Experts,
I am comparatively new to postgres database and we are starting to use 
on premise postgres and also AWS aurora postgres for our applications. I 
know there are many extensions which are kind of additional features 
which by default do not come with the installations. There are many such 
extensions in postgres available. But I want to understand from experts 
here , are there a list of extensions which one must have and which are 
good to have for on premise postgres and aws postgres?


Not really as that is highly dependent on what you are using Postgres for.

The extensions that ship as contrib from community Postgres can be 
found here:


https://www.postgresql.org/docs/current/contrib.html

For what Aurora Postgres allows see:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Extensions.html
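
A quick way to see what any given installation actually offers, on-premise
or Aurora:

-- Every extension the server can install, with a one-line description:
SELECT name, default_version, comment
FROM pg_available_extensions
ORDER BY name;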




Regards
Veem



--
Adrian Klaver
[email protected]




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Bird



On October 14, 2025 9:40:45 PM UTC, David Barsky  wrote:
>> If testing is all scripted, then why not put "pg_ctl stop" at the end of
>the script?
>
>Sorry for the delay.
>
>It’s _mostly_ scripted, but two major reasons:
>
>1. If that script is cancelled or interrupted for any reason, it’s possible
>that
>   `pg_ctl stop` won't be called and I'd have a leaked process.

bash has an EXIT trap you can use to run functions even in the case of
interrupts; you can create a wrapper script if it's not written in bash.
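
A minimal sketch of that idea; the data directory and test command are
illustrative:

#!/usr/bin/env bash
set -euo pipefail
cleanup() { pg_ctl -D ./test1 stop -m fast || true; }
trap cleanup EXIT           # runs on normal exit, Ctrl-C, or failure
pg_ctl -D ./test1 -l logfile start
run-tests                   # stand-in for the actual test suite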

>   I could mitigate this by calling `pg_ctl stop` at the *start* of the
>   script, but that adds a bit of latency I'd prefer to avoid.

You could also run pg_ctl stop in the background (i.e. in another process). 
Again, if using bash, you just add & at the end. It should be possible to 
create processes in any scripting language.




Re: Enquiry about TDE with PgSQL

2025-10-17 Thread Greg Sabino Mullane
On Fri, Oct 17, 2025 at 12:49 AM Ron Johnson wrote:

> But filesystem encryption still means that validly logged-in users see the
> unencrypted data.  That's great for a laptop that might get stolen, or for
> drives that are discarded without being wiped, but are no protection
> against hackers who want to exfiltrate your data.


I stand by my recommendation. If someone is logged in and has access to
your data directory (e.g. is root or postgres user), then they also have
the TDE key or some easy way to bypass it.

> TDE was added to SQL Server, with (to us, at least) minimally-noticed
> overhead.  Oracle has it, too, but I don't know the details.
> The bottom line is that requirements for TDE are escalating, whether you
> like it or not


I'm not arguing against putting TDE in Postgres - indeed, I am all for
that. But it's a very tricky thing to do technically, with minimal benefits
other than "checking the box" of some security requirements document.

> The bottom line is that requirements for TDE are escalating, whether you
> like it or not, as Yet Another Layer Of Defense against hackers
> exfiltrating data, and then threatening to leak it to the public.
>

I'd love to see a real-world example where TDE would have saved someone but
disk encryption could not.

-- 
Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support


Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Dominique Devienne
On Mon, Oct 13, 2025 at 5:37 PM Tom Lane  wrote:
> Dominique Devienne  writes:
> > On Mon, Oct 13, 2025 at 5:08 PM Ron Johnson  wrote:
> >> Shared memory means that I can stomp all over you, and you can't stop me.  
> >> That's the antithesis of ACID.
>
> > SHM is how SQLite in WAL mode coordinates access to the same DB from
> > several connections. So if it's good enough for SQLite, I don't see
> > what it would be wrong for PostgreSQL too.
>
> SQLite has accepted the cost that comes with being embedded, which is
> that application-side memory-stomping bugs can destroy the database.
> Postgres is not willing to make that tradeoff.  From a pure
> developer's perspective, every time we got a bug report we'd have to
> ask "did you observe this while running embedded?" and then demand a
> repro that uses a non-embedded database.  We are not going to help
> application authors debug their own bugs, especially not when we have
> no visibility into what those are.

That's true for the embedded case, true.

There are some of those on the ML (which is in fact a custom SQLite-based
forum), but not that many, far from it. So that concern does
exist, but maybe not to the extent one fears.

But not for the localhost case, which remains "client-server"
(multi-process). And SHM is then one of the options for the
"transport" between the libpq-based client, and the backends (running
on localhost). Unix Socket on Linux is almost perfect for the
localhost case, but again, is not portable. And I'd need simplified
authN, on the fly start if necessary, that kind of thing. Our apps are
multi-process themselves too, and each process can also be
multi-connection. In the localhost case, the data is private to you,
but can still be accessed concurrently across connections (from one or
more processes). And in that case, we shouldn't have to deal with
passwords, and everything should run as the OS user.

> > SQLite is also ACID.
>
> I guess they have a different set of assumptions about what that
> buzzword means.

As you wrote, there are existing footguns one can turn on to weaken
ACID already. PostgreSQL is superior to SQLite in many ways. I get
that the embedded use-case is a step too far, for a long time, but the
localhost case, for testing but also for localhost private-data
serving (possibly a cache of a larger remote server) is much more
attainable. And valuable IMHO.




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Adrian Klaver

On 10/11/25 19:56, Ron Johnson wrote:
On Sat, Oct 11, 2025 at 7:11 PM Adrian Klaver wrote:


I think OP is looking for AUTO_CLOSE, like SQL Server (and Rdb/VMS
before it).  Its only real utility is OP's use-case: a Windows
desktop running local testing.


From the original post:

"Is there any interest in adding a command line option to the `postgres`
CLI"

Which I took to mean:

https://www.postgresql.org/docs/current/app-postgres.html



We in the shell scripting daemon world don't think like that.




--
Adrian Klaver
[email protected]




Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-17 Thread Pavan Kumar
Hello Ashish Mukherjee,

Did you get a chance to disable the JIT-related parameters after the PG 17
upgrade?

If your database size is in the TB range, consider pglogical bi-directional
replication: you will have a 17.x version and a 12.x version, and if you run
into any issue you can always switch back to the old version.
I am not sure TDE supports bi-directional replication.



On Tue, Sep 30, 2025 at 3:23 AM Ashish Mukherjee 
wrote:

> Thank you all for your inputs.
>
> Well, Percona TDE was leading to the queries being very inefficient / slow
> after upgrading to pgsql 17. Explain analyze shows that query planning time
> shoots up crazily. A decision was taken to go back to pgsql 12, which
> worked out fine as there was no incompatibility. I restored from the binary
> dump with the -j option, as our database is huge. I completely agree that
> downgrade is not a good option but a pragmatic one under the circumstances.
>
> Now the consideration is to use some other encryption option for the
> database which will work fine on pgsql 17. Cybertec's technology is one
> route, the other is EDB. I am happy to hear experiences of folks here with
> pgsql encryption options for v17 on large databases (2.5T in our case).
>
> On Mon, Sep 29, 2025 at 5:10 AM Merlin Moncure  wrote:
>
>> On Fri, Sep 26, 2025 at 8:16 AM Ashish Mukherjee <
>> [email protected]> wrote:
>>
>>> Hello,
>>>
>>> I have a strange requirement to downgrade from pgsql 17 to pgsql 12.
>>> This is because we found in production certain incompatibilities between
>>> both versions for our database. It should have been caught in testing but
>>> was not.
>>>
>>
>> Agree with others that snap downgrade is not necessarily a good choice
>> here.  Either way, if I were in your shoes, I'd be loading a plain text
>> dump, maybe with some light massaging to strip out some compatibility
>> issues.
>>
>> Can you let us know what the hang up is?  Version upgrades these days are
>> usually pretty painless except for some performance issues, unless you have
>> some unusual situations, for example, exotic extensions.
>>
>> merlin
>>
>>

-- 



Regards,
Pavan Kumar
Sr. Database Administrator
NEXT GENERATION PROFESSIONALS, LLC
Cell# 267-799-3182 # pavan.dba27 (Gtalk)
India # 9000459083

Take Risks; if you win, you will be very happy. If you lose you will be Wise.


Re: Upgrade & Rollback plan: My customer requests rollback via old-version standby (13 ↔ 17) — need community advice

2025-10-17 Thread Laurenz Albe
On Sat, 2025-10-11 at 23:02 +0700, Vu Le (JData - HN) wrote:
> If possible, could you please share any additional best practices or
> important considerations apart from testing the new version in a
> staging environment?

All I can think of is:

- Always keep a backup around.

- Read the "Migration to version x" section of the release notes of v14,
  v15, v16 and v17 before you test the application.  That will give you
  ideas what areas to test particularly well.

- Test your administrative procedures too.  For example, v15 removed
  the exclusive online file system backup.

Yours,
Laurenz Albe




Re: Upgrade & Rollback plan: My customer requests rollback via old-version standby (13 ↔ 17) — need community advice

2025-10-17 Thread Bruce Momjian
On Sat, Oct 11, 2025 at 11:02:43PM +0700, Vu Le (JData - HN) wrote:
> Thank you very much, Laurenz.
> After reading several sources, I can confirm that this approach is
> indeed not feasible at all.
> I’m planning to prepare a short proposal and report to the customer,
> focusing on the major risks they would face rather than trying to
> implement it.
> 
> If possible, could you please share any additional best practices or
> important considerations apart from testing the new version in a
> staging environment?

Yeah, the odds of needing to revert to older Postgres releases is
minimal, so there isn't much discussion or infrastructure to make it
easy.  I think we do pretty well with upgrading.  ;-)

I know there are other unnamed databases where downgrading is a common
need.

---

> 
> Thank you once again for your guidance.
> Wishing you a pleasant weekend ahead!
> 
> 
> 
> On Fri, Oct 10, 2025 at 4:01 PM Laurenz Albe  wrote:
> >
> > On Fri, 2025-10-10 at 15:26 +0700, Vu Le (JData - HN) wrote:
> > > I'm currently planning a major version upgrade from PostgreSQL 13.x to
> > > 17.x in a production environment.
> > >
> > > My customer has requested the following rollback approach, and I’d
> > > like to confirm if it’s technically feasible or advisable before
> > > proceeding.
> > >
> > > Scenario:
> > > 1. They have a **Primary–Standby setup** (streaming replication).
> > > 2. Their idea is to **upgrade only the Primary** (to v17) first, while
> > > keeping the **Standby** on v13 (the old version).
> > >- The upgraded Primary will run read/write traffic for about a week
> > > to validate stability.
> > >- If any serious issue occurs, the plan is to **switch over**
> > > (promote the v13 Standby), adjust IPs, and resume operations there —
> > > minimizing downtime.
> > > 3. They also asked whether it’s possible for **data generated on the
> > > v17 Primary** to still be **replicated back to the v13 Standby**, so
> > > that rollback would be fast and without data loss.
> > >
> > > Constraints:
> > > - They **cannot use a Blue/Green or clone-based approach**, because of
> > > **limited storage resources**.
> > > - They also doesn’t want the old data directory to become outdated
> > > (they expects it could stay in sync with the upgraded node).
> > > - They only have **UAT and Production environments** (no dedicated 
> > > Staging).
> > >
> > > Questions:
> > > 1. Is there **any supported or practical method** to replicate data
> > > *backward* (from PostgreSQL 17 to 13) — even temporarily, for rollback
> > > purposes?
> > > 2. If not, what are the **recommended real-world rollback strategies**
> > > for a low-downtime upgrade under these constraints?
> > > 3. Are there open-source tools or logical replication setups (e.g.,
> > > pglogical, Bucardo, etc.) that could safely achieve something similar?
> >
> > The only way to achieve something like that is to use logical replication.
> > You'd have to switch from streaming replication to logical replication:
> >
> > - create a publication for all tables on the primary
> > - turn off the application
> > - promote the standby server
> > - create a subscription on the former standby with "copy_data = off"
> >
> > Then you can upgrade the former primary with pg_upgrade --link and
> > restart the application.
> >
> > After that, logical replication will keep the v13 machine updated.
> >
> > Note that you cannot run any DDL statements on the database after that,
> > else replication will break.
> >
> > You cannot upgrade the standby server, you'll have to discard the data
> > directory and start with a new pg_basebackup.
> >
> > This is all pretty complicated and should be tested well.
> > But then, it might be a better idea to invest the testing effort into
> > testing the application on PostgreSQL v17, so that you are confident
> > that you won't need to downgrade.  That would allow you to use a simpler
> > and less risky form of upgrade.
> >
> > Yours,
> > Laurenz Albe
> 
> 
> 
> --
> Best Regards,
> 
> Miles Le | Vu (Mr.)
> 
> 

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Do not let urgent matters crowd out time for investment in the future.
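
For reference, a minimal SQL sketch of the publication/subscription steps
Laurenz outlines in the quoted message; object names and the connection
string are illustrative:

-- On the primary, before upgrading it (it becomes the v17 node):
CREATE PUBLICATION rollback_pub FOR ALL TABLES;

-- On the promoted former standby (stays on v13), skipping the initial copy:
CREATE SUBSCRIPTION rollback_sub
    CONNECTION 'host=primary17 dbname=mydb user=repuser'
    PUBLICATION rollback_pub
    WITH (copy_data = off);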




Re: Quick questions about postgres name?

2025-10-17 Thread Álvaro Herrera
On 2025-Sep-23, Tom Lane wrote:

> "Dan Mahoney (Gushi)"  writes:
> > It makes mention of the original professor who spearheaded the project 
> > under a DARPA grant, but it would probably do well to know where the name 
> > came from.
> 
> You should read Joe Hellerstein's recollections [1].  But tl;dr: the
> origin of the name is the old INGRES database system; Postgres was
> then named Post-inGres.

INGRES itself is an acronym, and reportedly it has nothing to do with
the French painter; quoth
https://ariel.its.unimelb.edu.au/~yuan/ingres/ingres.faq.html:

|   Trivium: INGRES is an acronym for INteractive Graphics REtrieval System
|   (revealing the nature of the project out of which the experiments with
|   relational databases arose).  By happy accident, there was also a
|   French artist by the same name: Jean Auguste Dominique Ingres
|   (1780-1867).  (A highly placed source who wishes to remain anonymous
|   confirms that the selection of the name WAS an accident.)



> There are also some references to PostQUEL referring to the ancient
> QUEL (QUEry Language) project [2].

The same document cited above says:

|   01.005 Does OpenIngres support SQL?

|   OpenIngres supports SQL.  OpenIngres 1.x is compliant with the ANSI/ISO
|   Entry-Level SQL92 (SQL2) standard and also has some of the
|   Intermediate-Level features.  Embedded SQL and embedded Dynamic SQL are
|   also fully supported.  (See section 05.003 below for SQL2 references.)

|   Note that OpenIngres also supports QUEL and embedded QUEL, which it
|   inherits from University Ingres.  Computer Associates de-emphasizes
|   QUEL for obvious commercial reasons.  Because QUEL has considerable
|   technical advantages over SQL, OpenIngres adopted SQL relatively late
|   (c.  1986) and perhaps for this reason there is a lingering
|   misapprehension that OpenIngres still does not support SQL.

|   University Ingres supports only QUEL and embedded QUEL.  (See 03.006
|   for a description of `onyx' which provides an SQL to QUEL interface for
|   University Ingres.)

-- 
Álvaro Herrera PostgreSQL Developer  —  https://www.EnterpriseDB.com/
"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente"




High latency and profiling

2025-10-17 Thread Roberto Nunnari

Hello. I was notified about high latency on one of our PostgreSQL clusters.
This latency is observed from the application and has been increasing. There
is no indication about the exact database, as the application uses several
DBs, but the largest and most used is 'mydb'. So, a couple of months ago I
installed pg_profile and configured it as follows:

shared_preload_libraries = 'pg_stat_statements,dblink,pg_stat_kcache,pg_wait_sampling'
pg_profile.max_sample_age = 90

\c postgres
CREATE SCHEMA extensions;
CREATE EXTENSION dblink SCHEMA extensions;
CREATE EXTENSION pg_profile SCHEMA extensions;

\c mydb
CREATE SCHEMA extensions;
CREATE EXTENSION dblink SCHEMA extensions;
CREATE EXTENSION pg_stat_statements SCHEMA extensions;
CREATE EXTENSION pg_stat_kcache SCHEMA extensions;
CREATE EXTENSION pg_wait_sampling SCHEMA extensions;
CREATE EXTENSION pg_profile SCHEMA extensions;
select extensions.set_server_size_sampling('local','23:00+01',interval '2 hour',interval '8 hour');
CREATE USER collector_user with password '';
GRANT pg_read_all_stats TO collector_user;
grant connect on database mydb to collector_user;
ALTER ROLE collector_user SET search_path = extensions,public;

The postgres user had the following crontab:

13,43 * * * * psql -d mydb -c 'SELECT extensions.take_sample()' > /dev/null 2>&1

I made sure autovacuum works correctly, and after that, just to make sure, I
ran vacuum analyze on all DBs. This has not led to any notable improvement in
the latency as seen by the application. I access pg_profile data from
grafana, but I cannot see anything that looks wrong. Overall Hit Ratio is
always between 99.3% and 100%. The only strange things I see in grafana are
the following: "Overall statement times" says "No data" and 'db query error:
pq: column "blk_read_time" does not exist'. Also the graph "Statement time"
says "No data" and "db query error: pq: column sst.blk_read_time does not
exist". Wait event types (statements) graph: N/A was quite stable at about
0.200 s/s until 2nd September, then grew regularly and sharply up to 4 s/s
until 22nd September, when it dropped back to 0.7 s/s and has been stable at
that value till now. Developers report that their software is already
optimized and database size is not a problem. They suggest the issue might be
inside PostgreSQL itself. Could you help me understand what is causing this?
In particular, what does the N/A data in the "Wait event types (statements)"
graph represent? Thank you and best regards. Robi
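
As a pointer for readers: once samples exist, pg_profile renders a report
between two of them with something like the following (sample IDs are
illustrative; see the pg_profile documentation for variants):

-- HTML report covering samples 100..110 on server 'local':
SELECT extensions.get_report('local', 100, 110);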

Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Tom Lane
Greg Sabino Mullane  writes:
> On Mon, Oct 13, 2025 at 3:19 PM David Barsky  wrote:
>> Anyways, I'll try to get at what motivated this whole discussion: would
>> there be community opposition to adding a CLI flag that'd exit/shutdown all
>> Postgres processes once all pending connections close?

> I don't know about opposition, per se, but the onus is on you to provide a
> strong use case not already covered by existing tools.

In particular, it's not terribly clear why the existing "smart"
shutdown mode isn't sufficient.

regards, tom lane




Re: Option on `postgres` CLI to shutdown when there are no more active connections?

2025-10-17 Thread Tom Lane
Dominique Devienne  writes:
> On Mon, Oct 13, 2025 at 5:08 PM Ron Johnson  wrote:
>> Shared memory means that I can stomp all over you, and you can't stop me.  
>> That's the antithesis of ACID.

> SHM is how SQLite in WAL mode coordinates access to the same DB from
> several connections. So if it's good enough for SQLite, I don't see
> what it would be wrong for PostgreSQL too.

SQLite has accepted the cost that comes with being embedded, which is
that application-side memory-stomping bugs can destroy the database.
Postgres is not willing to make that tradeoff.  From a pure
developer's perspective, every time we got a bug report we'd have to
ask "did you observe this while running embedded?" and then demand a
repro that uses a non-embedded database.  We are not going to help
application authors debug their own bugs, especially not when we have
no visibility into what those are.

> SQLite is also ACID.

I guess they have a different set of assumptions about what that
buzzword means.

regards, tom lane




Re: Alerting on memory use and instance crash

2025-10-17 Thread sud
Thank you.

The other question I had was, are there any pg_* views using which we are
able to see which session/connection is using the highest amount of memory?
I don't see any such columns in pg_stat_activity.

On Thu, Oct 9, 2025 at 12:37 AM Ron Johnson  wrote:

> On Wed, Oct 8, 2025 at 2:58 PM sud  wrote:
> [snip]
>
>> Do you mean in normal Postgres it's always a single instance/memory and
>> single storage attached? Then I also do not see any such cluster-level
>> views in AWS Aurora Postgres?
>>
>
> Yup.
>
>
>> Pardon if it's a silly one to ask.
>>
>
> A Google for "what's the difference between Oracle and Postgresql" _might_
> help.  I've never done that, so don't know what you'll find.
>
> As far as how Aurora works... you need to ask AWS.  It's been too heavily
> modified for a list dedicated to pure/unmodified Postgresql to help.
>
>
>> On Wed, Oct 8, 2025 at 9:52 PM Adrian Klaver 
>> wrote:
>>
>>> On 10/8/25 08:42, sud wrote:
>>> > Hi Experts,
>>> >
>>> > It's postgres version 16. I have two questions on alerting as below.
>>> >
>>> > 1)If we want to have alerting on any node/instance that gets crashed
>>> :-
>>> > In other databases like Oracle the catalog Views like "GV$Instance"
>>> used
>>> > to give information on whether the instances are currently active/down
>>> > or not. But in postgres it seems all the pg_* views are instance
>>> > specific and are not showing information on the global/cluster level
>>> but
>>> > are restricted to instance level only. So is there any other way to
>>> > query the pg_* views to have alerts on the specific instance crash?
>>>
>>> 1) When you say instance do you mean database?
>>>
>>> 2) Not all system tables/views are database only.
>>>
>>> For instance:
>>> https://www.postgresql.org/docs/current/catalog-pg-database.html
>>> https://www.postgresql.org/docs/current/catalog-pg-auth-members.html
>>> https://www.postgresql.org/docs/current/catalog-pg-authid.html
>>> https://www.postgresql.org/docs/current/view-pg-roles.html
>>>
>>>
>>> > 2)Is there a way to fetch the data from pg_* view to highlight the
>>> > specific connection/session/sqls which is using high memory in
>>> postgres?
>>> >
>>> > Appreciate your guidance.
>>> >
>>> > Regards
>>> > Sud
>>>
>>>
>>> --
>>> Adrian Klaver
>>> [email protected]
>>>
>>
>
>


Re: Downgrade pgsql 17 to pgsql 12 question

2025-10-17 Thread Merlin Moncure
On Tue, Sep 30, 2025 at 3:47 AM Laurenz Albe 
wrote:

> On Tue, 2025-09-30 at 13:53 +0530, Ashish Mukherjee wrote:
> > Now the consideration is to use some other encryption option for the
> > database which will work fine on pgsql 17. Cybertec's technology is
> > one route, the other is EDB. I am happy to hear experiences of folks
> > here with pgsql encryption options for v17 on large databases
> > (2.5T in our case).
>
> I will refrain from making a recommendation, but you should know that
> data-at-rest encryption will always incur a certain performance penalty.
>
> Whatever solution you choose, you should run performance tests.
>

Yeah.  This applies to database upgrades in general.

The lists see a steady stream of "I upgraded, and query X that drives my
platform is now 100x slower" reports.  I don't think this suggests that
postgres is getting worse; foundationally, it's mostly getting faster.
It's simply that the planner changes how it responds to certain
conditions.  Over time, I've learned some painful lessons and try to
write SQL that is less risky from a performance standpoint.

Developers tend to optimize into fragile planner behaviors chasing
performance, sometimes without realizing it.  These kinds of issues are
almost always very fixable assuming you can modify the SQL or the thing
writing the SQL. The takeaway here is: test/verify before making major
upgrades, and bake in some recalibration time.  Adding encryption or other
major features strongly reinforces that need.
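
To make that concrete, here is a minimal sketch of the kind of
recalibration pass I mean, run against the new version before cutover
(the tables and query below are hypothetical; pg_stat_statements must be
installed):

-- 1. Find the hottest statements on the old version:
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

-- 2. Re-run each hot query with EXPLAIN on the upgraded instance and
--    diff the plans/timings against the old ones:
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.customer_id, sum(li.amount)
FROM orders o
JOIN line_items li ON li.order_id = o.id
WHERE o.created_at >= now() - interval '7 days'
GROUP BY o.customer_id;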

merlin


Re: Alerting on memory use and instance crash

2025-10-17 Thread Adrian Klaver

On 10/8/25 08:42, sud wrote:

Hi Experts,

It's postgres version 16. I have two questions on alerting as below.

1) If we want alerting on any node/instance that crashes: in other
databases like Oracle, catalog views like "GV$INSTANCE" give information
on whether the instances are currently active or down. But in Postgres
it seems all the pg_* views are instance-specific and show no
information at the global/cluster level; they are restricted to instance
level only. So is there any other way to query the pg_* views to get
alerts on a specific instance crash?


1) When you say instance do you mean database?

2) Not all system tables/views are database only.

For instance:
https://www.postgresql.org/docs/current/catalog-pg-database.html
https://www.postgresql.org/docs/current/catalog-pg-auth-members.html
https://www.postgresql.org/docs/current/catalog-pg-authid.html
https://www.postgresql.org/docs/current/view-pg-roles.html
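
As a quick illustration (a minimal sketch): pg_database is a shared
catalog, so it is visible from any database in the cluster:

SELECT datname, datallowconn
FROM pg_database;   -- one row per database, cluster-wide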


2) Is there a way to fetch data from the pg_* views to highlight the
specific connection/session/SQL that is using high memory in Postgres?


Appreciate your guidance.

Regards
Sud



--
Adrian Klaver
[email protected]




High latency and profiling

2025-10-17 Thread Roberto Nunnari

Hello.

Hope this email will keep the formatting this time...

I was notified about high latency on one of our PostgreSQL clusters.
This latency is observed from the application and has been increasing.
There is no indication of which database is involved, as the application
uses several DBs, but the largest and most used is 'mydb'.
So, a couple of months ago I installed pg_profile and configured it as 
follows:


shared_preload_libraries = 
'pg_stat_statements,dblink,pg_stat_kcache,pg_wait_sampling'

pg_profile.max_sample_age = 90

\c postgres
CREATE SCHEMA extensions;
CREATE EXTENSION dblink SCHEMA extensions;
CREATE EXTENSION pg_profile SCHEMA extensions;

\c mydb
CREATE SCHEMA extensions;
CREATE EXTENSION dblink SCHEMA extensions;
CREATE EXTENSION pg_stat_statements SCHEMA extensions;
CREATE EXTENSION pg_stat_kcache schema extensions;
CREATE EXTENSION pg_wait_sampling schema extensions;
CREATE EXTENSION pg_profile SCHEMA extensions;
select extensions.set_server_size_sampling('local','23:00+01',interval 
'2 hour',interval '8 hour');


CREATE USER collector_user with password '';
GRANT pg_read_all_stats TO collector_user;
grant connect on database mydb to collector_user;
ALTER ROLE collector_user SET search_path = extensions,public;

The postgres user had the following crontab:
13,43 * * * * psql -d mydb -c 'SELECT extensions.take_sample()' > 
/dev/null 2>&1
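
As a sanity check that samples are accumulating, pg_profile's documented
helpers can be used (a minimal sketch; the sample IDs are hypothetical):

\c mydb
SELECT * FROM extensions.show_samples();   -- list collected samples
SELECT extensions.get_report(1, 10);       -- HTML report for samples 1..10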


I made sure autovacuum works correctly and after that, just to make
sure, I ran vacuum analyze on all DBs. This has not led to any notable
improvement in the latency as seen by the application.


I access pg_profile data from grafana, but I cannot see anything that 
looks wrong.

Overall Hit Ratio is always between 99.3% and 100%.
The only strange thing I see in grafana, are the following:

Overall statement times says "No data" and 'db query error: pq: column
"blk_read_time" does not exist'. Also the graph "Statement time" says
"No data" and "db query error: pq: column sst.blk_read_time does not exist".


Wait event types (statements) graph: N/A was quite stable at about
0.200 s/s until 2nd September, then grew regularly and sharply up to
4 s/s until 22nd September, when it dropped back to 0.7 s/s; it has been
stable at that value till now.


Developers report that their software is already optimized and database 
size is not a problem. They suggest the issue might be inside PostgreSQL 
itself.


Could somebody help me understand what is causing the growing latency?

Also, but more related to pg_profile, what does the N/A data in the 
"Wait event types (statements)" graph represent?


Thank you and best regards.
Robi




Re: PostgreSQL 18 not available for Noble?

2025-10-17 Thread Ray O'Donnell



On 28/09/2025 22:17, Adrian Klaver wrote:

On 9/28/25 14:04, Ray O'Donnell wrote:


On 28 September 2025 21:54:01 Adrian Klaver 
 wrote:



On 9/28/25 13:46, Ray O'Donnell wrote:

Hi all,

As per $subject - is PG 18 not available for Ubuntu Noble (24.04)?


According to the below it is:

https://www.postgresql.org/download/linux/ubuntu/


Hi Adrian,

That's what I though, right enough.


I see it here:

https://apt.postgresql.org/pub/repos/apt/dists/noble-pgdg/

Check:

Are the PGDG repos set up in Tuxedo OS?

Or is Tuxedo using the default Ubuntu repo, which is pinned at 
Postgres 16:


https://packages.ubuntu.com/noble/database/postgresql



Problem solved - it turned out I was doing something stupid - I hadn't 
downloaded the repository key from apt.postgresql.org. Once I did that, 
PG 18 installed without a hitch. I put it down to brain fog (I'm 
currently isolating with Covid)...
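
For anyone who hits the same thing, the key/repo setup on Noble is
roughly this (a sketch of the steps documented on the PostgreSQL
download page; paths follow the PGDG instructions):

sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc \
  --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt noble-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt update
sudo apt install postgresql-18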


Thanks again for the help!

Ray.


--
Ray O'Donnell // Galway // Ireland
[email protected]





Re: Enquiry about TDE with PgSQL

2025-10-17 Thread Ron Johnson
On Fri, Oct 17, 2025 at 3:01 AM Laurenz Albe 
wrote:

> On Fri, 2025-10-17 at 00:49 -0400, Ron Johnson wrote:
> > On Thu, Oct 16, 2025 at 6:05 PM Greg Sabino Mullane 
> wrote:
> > >
> > > TDE, on the other hand, is a very complex and difficult thing to add
> into Postgres.
> >
> > TDE was added to SQL Server, with (to us, at least) minimally-noticed
> overhead.
> > Oracle has it, too, but I don't know the details.
> >
> > The bottom line is that requirements for TDE are escalating, whether you
> like it or
> > not, as Yet Another Layer Of Defense against hackers exfiltrating data,
> and then
> > threatening to leak it to the public.
>
> Bruce Momjian has interesting things to say about that in
> https://compiledconversations.com/6/ (unfortunately I don't remember where
> exactly in this 84 minute piece).
>
> It is a feature that users want (or need to comply with whatever they feel
> they have to comply with).  On the other hand, it has very limited
> technical
> or security value, which hampers its acceptance into core.
>

I gave you a reason: "Yet Another Layer Of Defense against hackers
exfiltrating data".  It's the same reason PgBackRest encrypts backups.

-- 
Death to , and butter sauce.
Don't boil me, I'm still alive.
 lobster!