Re: Feature request: Settings to disable comments and multiple statements in a connection
On Thu, Jun 5, 2025 at 1:06 AM Tom Lane wrote:
> > Provide a client connection option (and/or implement the backend support)
> > to allow only one statement in an execute request
>
> This exists already; you just have to use the extended query protocol.

Hi Tom. Can you be more specific please?
In the context of LibPQ, in case it matters.

TIA, --DD
Re: Feature request: Settings to disable comments and multiple statements in a connection
On Tue, 2025-06-10 at 10:02 +0200, Dominique Devienne wrote:
> On Thu, Jun 5, 2025 at 1:06 AM Tom Lane wrote:
> > > Provide a client connection option (and/or implement the backend support)
> > > to allow only one statement in an execute request
> >
> > This exists already; you just have to use the extended query protocol.
>
> Hi Tom. Can you be more specific please?
> In the context of LibPQ, in case it matters.

PQexecParams()

Yours,
Laurenz Albe
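[Editor's note: a minimal libpq sketch of the point above. PQexecParams() uses the extended query protocol, which accepts exactly one statement per call, so a stacked second statement makes the whole call fail; the connection string and queries here are purely illustrative.]

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* hypothetical connection string; adjust for your environment */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* PQexec() uses the simple query protocol: both statements execute. */
    PGresult *res = PQexec(conn, "SELECT 1; SELECT 2;");
    PQclear(res);

    /* PQexecParams() uses the extended query protocol: the server
     * rejects the multi-statement string outright, typically with
     * "cannot insert multiple commands into a prepared statement". */
    res = PQexecParams(conn, "SELECT 1; SELECT 2;",
                       0, NULL, NULL, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "rejected: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}
```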
Re: pg_upgradecluster version 10 to 16 question
On 6/10/25 13:01, Jim Cunning wrote:
> I had a Ubuntu 20.04 LTS system with a postgresql version 10 database,
> and the system became unbootable. The database was stored in
> /var/lib/postgresql/10/main and I was able to recover the entire contents
> of the data in that directory and below. I now have a system running
> ubuntu 24.04 LTS with postgresql version 16. I obviously no longer have
> a running version 10 server, so I cannot use pg_dump to retrieve the
> data. I have placed the recovered version files in
> /var/lib/postgresql/10/main on my new system, alongside the
> /var/lib/postgresql/16/main that was created when I installed
> postgresql. I have tried several iterations of the pg_upgradecluster
> command, a Python script provided by Ubuntu, but it says it cannot find
> the database files. I also see from the postgresql wiki pages that there
> is a pg_upgrade utility program, but cannot find it in any of the
> Ubuntu-provided installation files. Can anyone provide some guidance on
> how I might proceed?
>
> Thanks in advance.

1) Do:

   man pg_upgradecluster

2) Assuming you used the PGDG repos to install Postgres you can do:

   sudo apt install postgresql-10

to get a Postgres 10 install. I would move the /var/lib/postgresql/10/main
you created out of the way and then move the contents back in to the
installed version.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: pg_upgradecluster version 10 to 16 question
On Tue, Jun 10, 2025 at 1:01 PM Jim Cunning wrote:
> I obviously no longer have a running version 10 server,

You will need to correct this lack if you want to make use of version 10
data files.

David J.
Re: pg_upgradecluster version 10 to 16 question
On 6/10/25 13:14, Adrian Klaver wrote:
> On 6/10/25 13:01, Jim Cunning wrote:
> > I had a Ubuntu 20.04 LTS system with a postgresql version 10 database,
> > and the system became unbootable. [...]
>
> 1) Do:
>
>    man pg_upgradecluster
>
> 2) Assuming you used the PGDG repos to install Postgres you can do:
>
>    sudo apt install postgresql-10
>
> to get a Postgres 10 install. I would move the /var/lib/postgresql/10/main
> you created out of the way and then move the contents back in to the
> installed version.

Forgot to say, be sure and retain a copy of the original 10 $DATADIR
somewhere where you can fetch it should the above not work as planned.

--
Adrian Klaver
adrian.kla...@aklaver.com
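[Editor's note: a rough sketch of the steps described above, assuming the standard Debian/Ubuntu cluster layout and package names; paths are the ones from the thread, and every step is destructive enough to double-check before running.]

```shell
# 1) Keep a safety copy of the recovered v10 data directory first.
sudo cp -a /var/lib/postgresql/10/main /root/pg10-main-backup

# 2) Install a PostgreSQL 10 server (assumes the PGDG apt repository
#    is configured; Ubuntu 24.04's own archive no longer carries it).
sudo apt install postgresql-10

# 3) Replace the freshly initialized cluster's data directory with the
#    recovered files; ownership and permissions must match what the
#    server expects (postgres:postgres, mode 0700).
sudo systemctl stop postgresql@10-main
sudo rm -rf /var/lib/postgresql/10/main
sudo cp -a /root/pg10-main-backup /var/lib/postgresql/10/main
sudo chown -R postgres:postgres /var/lib/postgresql/10/main
sudo chmod 700 /var/lib/postgresql/10/main

# 4) With a working v10 cluster in place, upgrade it to v16.
sudo pg_upgradecluster 10 main
```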
Re: Is it correct to raise an exception in a domain check
On Tue, Jun 10, 2025 at 4:53 PM Logan Grosz wrote:
> Hi,
>
> I have a `DOMAIN` on `JSONB`. I need to verify the shape of the JSON, so
> I have a check expression written in PL/pgSQL. The docs say
>
> > Each constraint must be an expression producing a Boolean result
>
> Would it be correct to raise an exception to indicate a failed check? I
> perform type-casts (that throw already) and other checks that would
> benefit from attaching a description to the client error.

Since PG will throw an error when any CHECK constraint fails, are you
hoping to provide more details by throwing your own exception?

--
Death to , and butter sauce. Don't boil me, I'm still alive. lobster!
RE: Logical Replication Memory Allocation Error - "invalid memory alloc request size"
Dear Max,

Thanks for the report.

> The initial snapshot and data copy complete successfully for all tables.
> However, anywhere from 5 minutes to 2 hours after the initial sync, the
> subscription consistently fails with memory allocation errors like:
>
> ```
> 2025-06-10 14:14:56.800 UTC [299] ERROR: could not receive data from WAL stream: ERROR: invalid memory alloc request size 1238451248
> 2025-06-10 14:14:56.805 UTC [1] LOG: background worker "logical replication worker" (PID 299) exited with exit code 1
> ```

I think this is a known PostgreSQL bug, which has also been reported at
[1]. We are discussing how to fix it. Typically this can happen when there
are lots of concurrent transactions that contain DDL. IIUC there is no
good workaround for now: no parameter setting can avoid the failure; you
can only reduce the number of such transactions.

I would be happy if you could apply the patch posted at [1] and confirm
that it solves the issue, but that seems difficult because you are in a
managed environment.

[1]: https://www.postgresql.org/message-id/CALDaNm0TaTPuza7Fa%2BDRMzL%2BmqK3%2B7RVEvFiRoDJbU2vkJESwg%40mail.gmail.com

Best regards,
Hayato Kuroda
FUJITSU LIMITED
pg_upgradecluster version 10 to 16 question
I had an Ubuntu 20.04 LTS system with a postgresql version 10 database, and the system became unbootable. The database was stored in /var/lib/postgresql/10/main and I was able to recover the entire contents of the data in that directory and below. I now have a system running Ubuntu 24.04 LTS with postgresql version 16.

I obviously no longer have a running version 10 server, so I cannot use pg_dump to retrieve the data. I have placed the recovered version 10 files in /var/lib/postgresql/10/main on my new system, alongside the /var/lib/postgresql/16/main that was created when I installed postgresql.

I have tried several iterations of the pg_upgradecluster command, a Python script provided by Ubuntu, but it says it cannot find the database files. I also see from the postgresql wiki pages that there is a pg_upgrade utility program, but cannot find it in any of the Ubuntu-provided installation files.

Can anyone provide some guidance on how I might proceed? Thanks in advance.
Re: Is it correct to raise an exception in a domain check
Logan Grosz writes:
> I have a `DOMAIN` on `JSONB`. I need to verify the shape of the JSON, so
> I have a check expression written in PL/pgSQL. The docs say
> > Each constraint must be an expression producing a Boolean result
> Would it be correct to raise an exception to indicate a failed check?

It'd be better just to have the CHECK expression return false.
Admittedly, that's usually just going to end in an exception, but
pre-judging that inside the expression doesn't seem ideal. An example
of why not is that you'd break "soft" input error handling in COPY.

As you say, there are some cases where it's hard to avoid an exception,
but I'm not sure that "better error message" is a good justification
for throwing one. Still, in the end it's your own judgment to make.

regards, tom lane
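[Editor's note: to illustrate the suggestion above, the check can call a PL/pgSQL function that traps its own exceptions and returns a plain boolean, so the constraint itself never throws; the function name and expected JSON shape below are invented for the example.]

```sql
-- Hypothetical shape check: the payload must be a JSON object with a
-- string "name" key.  Returning false (rather than raising) lets
-- PostgreSQL report the standard CHECK-violation error and keeps
-- "soft" input error handling (e.g. in COPY) working.
CREATE FUNCTION is_valid_payload(j jsonb) RETURNS boolean
LANGUAGE plpgsql IMMUTABLE AS $$
BEGIN
    RETURN jsonb_typeof(j) = 'object'
       AND jsonb_typeof(j->'name') = 'string';
EXCEPTION
    WHEN others THEN
        RETURN false;   -- swallow cast/type errors, report "invalid"
END;
$$;

CREATE DOMAIN payload AS jsonb
    CHECK (is_valid_payload(VALUE));
```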
is pg_stat_activity "transactional"? How fast does it update?
Hi. We're getting unit-test failures that look like data races, which have become very frequent recently, tripping our CI.

Basically we have a `services` table, for service discovery, which records the backend PID of each service's main DB connection, so that when querying that `services` table we can account for "stale" / "zombie" services that are still registered (e.g. crashed) but whose connection is gone, like so:

```sql
select s.name, s.backend_pid, ...
from services s
join pg_stat_activity a on a.pid = s.backend_pid
where a.datname = current_database()
```

The unit-test code is "linear", i.e. single-threaded, with pseudo-code:

1) Open Connection outer
2) Open Connection inner
3) Register service using inner (i.e. add a row in `services` with the backend PID of inner)
4) Close Connection inner (calls PQfinish)
5) Run the query above on outer. Sometimes it still sees that "stale" row, despite closing inner.

There is no question about the ordering above, i.e. PQfinish(inner) returned before the query is run. There's of course the possibility of another (3rd) connection reusing the same backend PID, but that's a remote chance, I believe. And I don't know of any other reliable ID for a connection than its backend PID.

So when and how fast does pg_stat_activity update, in the face of a PQfinish? What other scheme could detect "table rows" associated with "stale" connections?

Thanks for any insights, --DD
Re: is pg_stat_activity "transactional"? How fast does it update?
Dominique Devienne writes:
> So when and how fast does pg_stat_activity update, in the face of a
> PQfinish?

The shared state underlying pg_stat_activity is updated immediately
when something changes. However ...

PQfinish is asynchronous: it sends a "goodbye" message and then closes
the connection without waiting for a response. So depending on system
load, the associated backend could still be alive for some
milliseconds. There might be other client libraries that do that
differently.

Another thing that might be biting you is that a backend inspecting
pg_stat_activity takes a snapshot of the view's contents and then
holds onto that snapshot until end of transaction. You can get around
that with pg_stat_clear_snapshot().

regards, tom lane
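[Editor's note: a small psql-style sketch of the snapshot behavior described above; the timing-dependent parts are comments, and the counts shown would of course depend on the actual sessions present.]

```sql
-- Inside one transaction, repeated reads of pg_stat_activity return
-- the same snapshot, even if backends come and go in the meantime.
BEGIN;
SELECT count(*) FROM pg_stat_activity;   -- snapshot taken on first access

-- ... another session disconnects here ...

SELECT count(*) FROM pg_stat_activity;   -- still the stale snapshot
SELECT pg_stat_clear_snapshot();         -- discard the cached snapshot
SELECT count(*) FROM pg_stat_activity;   -- fresh view of shared state
COMMIT;
```

Note that outside an explicit transaction block each statement is its own transaction, so the staleness mostly bites multi-statement transactions like the test's join query.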
Is it correct to raise an exception in a domain check
Hi,

I have a `DOMAIN` on `JSONB`. I need to verify the shape of the JSON, so I have a check expression written in PL/pgSQL. The docs say

> Each constraint must be an expression producing a Boolean result

Would it be correct to raise an exception to indicate a failed check? I perform type-casts (that throw already) and other checks that would benefit from attaching a description to the client error.

Thank you,
Logan
Re: is pg_stat_activity "transactional"? How fast does it update?
On 6/10/25 08:46, Dominique Devienne wrote:
> On Tue, Jun 10, 2025 at 4:27 PM Tom Lane wrote:
> > Dominique Devienne writes:
> > > So when and how fast does pg_stat_activity update, in the face of a
> > > PQfinish?
> >
> > The shared state underlying pg_stat_activity is updated immediately
> > when something changes. [...]
>
> Many thanks Tom. Didn't know either of those facts. --DD

For more detail see:

https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS

--
Adrian Klaver
adrian.kla...@aklaver.com
Logical Replication Memory Allocation Error - "invalid memory alloc request size"
Hello,

I'm encountering a consistent issue with PostgreSQL 15 logical replication and would appreciate any guidance on debugging or resolving this problem.

*Setup:*
- Source: PostgreSQL 15.x
- Target: PostgreSQL 15.x
- Replication: Logical replication using publication/subscription (pgoutput)
- Tables: 3 tables (details below)

*Table Details:*
- Table 1: ~1,300 records, 7 columns, no large objects
- Table 2: ~100,000 records, 7 columns, no large objects
- Table 3: ~100,000 records, 17 columns, no large objects

*Problem:*
The initial snapshot and data copy complete successfully for all tables. However, anywhere from 5 minutes to 2 hours after the initial sync, the subscription consistently fails with memory allocation errors like:

```
2025-06-10 14:14:56.800 UTC [299] ERROR: could not receive data from WAL stream: ERROR: invalid memory alloc request size 1238451248
2025-06-10 14:14:56.805 UTC [1] LOG: background worker "logical replication worker" (PID 299) exited with exit code 1
```

This occurs whether I replicate all 3 tables together or individually. My initial hypothesis is that large transactions are creating WAL segments that exceed memory limits when sent to the subscriber. However, I haven't been able to confirm this or find the cause.

*Questions:*
1. What's the best approach to debug this memory allocation issue?
2. Are there specific PostgreSQL settings I should check?
3. How can I identify if large transactions are indeed the root cause?

*Additional Context:*
- This happens consistently across multiple replication attempts
- The error size varies but is always requesting > 1 GB
- No custom logical replication settings currently applied
- Subscriber machine has 256 GB of RAM and runs Ubuntu 20.04
- Can recreate it on different machines

I should also mention that we're operating in a managed environment on DigitalOcean, which means we don't have direct access to the WAL logs on the publisher node. This is why the log information above is limited.
I understand this constraint makes it more difficult to provide help, but I would really appreciate any insights or suggestions you might have. Thanks, Max
Re: is pg_stat_activity "transactional"? How fast does it update?
On Tue, Jun 10, 2025 at 4:27 PM Tom Lane wrote:
> Dominique Devienne writes:
> > So when and how fast does pg_stat_activity update, in the face of a
> > PQfinish?
>
> The shared state underlying pg_stat_activity is updated
> immediately when something changes. However ...
>
> PQfinish is asynchronous: it sends a "goodbye" message and then
> closes the connection without waiting for a response. So
> depending on system load, the associated backend could still be
> alive for some milliseconds. There might be other client
> libraries that do that differently.
>
> Another thing that might be biting you is that a backend inspecting
> pg_stat_activity takes a snapshot of the view's contents and then
> holds onto that snapshot until end of transaction. You can get
> around that with pg_stat_clear_snapshot().

Many thanks Tom. Didn't know either of those facts. --DD