Dear Abrahim
> The Citus extension package is installed, but it is not preloaded in
> shared_preload_libraries
> and the citus extension is not created.
It is possible for a shared library to be loaded even if shared_preload_libraries
is not set and CREATE EXTENSION has not been executed. Per my understanding, the specif
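For reference, a minimal sketch of how one might check and enable the Citus setup from psql (standard commands; the exact library name and restart mechanics depend on the install):

```sql
-- Check whether the library is preloaded and the extension created
SHOW shared_preload_libraries;
SELECT extname, extversion FROM pg_extension WHERE extname = 'citus';

-- Enable preloading (takes effect only after a server restart)
ALTER SYSTEM SET shared_preload_libraries = 'citus';
-- restart the server, then:
CREATE EXTENSION citus;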
On Fri, 11 Jul 2025 at 01:28, Tom Lane wrote:
>
> I think all you could do is monitor the pg_locks view and hope to
> catch the process in "waiting" state before it fails.
>
> It occurs to me to wonder though if we couldn't provide more
> context in the error about what lock is being waited for.
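As a stopgap, a query along these lines against the standard pg_locks and pg_stat_activity views can show ungranted lock waits while they are in progress:

```sql
-- Sessions currently waiting on a lock (granted = false),
-- joined to pg_stat_activity to see what they are running
SELECT l.pid, l.locktype, l.relation::regclass AS relation,
       l.mode, a.wait_event_type, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```

Polling this on an interval is essentially the "monitor and hope to catch it" approach described above.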
Having just received a shiny new dual-CPU machine to use as a PostgreSQL
server, I'm making a reasonable effort to configure it correctly.
The hardware has 128 cores, and I am running a VM with Red Hat 9 and
PostgreSQL 16.9.
In postgresql.conf I have:
max_worker_processes = 90
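For context, a hedged sketch of the related postgresql.conf settings (the values below are illustrative, not recommendations; the parallel-worker settings are capped by max_worker_processes):

```
max_worker_processes = 90          # pool shared by parallel query, logical replication, extensions
max_parallel_workers = 64          # subset of the above usable by parallel query
max_parallel_workers_per_gather = 8
```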
On Thu, Jul 10, 2025 at 5:48 AM Dominique Devienne
wrote:
> We store scientific information in PostgreSQL, and some of that is
> bytea and large, thus we must "chunk it" both for performance, and not
> be limited to 1GB (we do exceed that, in rare occasions).
>
> Recently I added md5/sha1 hashi
On Thu, Jul 10, 2025 at 10:58 AM Dimitrios Apostolou wrote:
> Can't find any related documentation, but I expect loss of "temp" space is
> of minor importance.
>
You might want to try finding some old discussions about why putting temp
tablespace on a RAM-drive is not a supported configuration.
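For reference, a sketch of the configuration being discussed, with hypothetical paths (the directory must already exist and be owned by the postgres OS user):

```sql
-- Hypothetical location on a separate, faster but less reliable device
CREATE TABLESPACE fast_temp LOCATION '/mnt/nvme/pg_temp';
ALTER SYSTEM SET temp_tablespaces = 'fast_temp';
SELECT pg_reload_conf();
```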
Hello list,
I have a database split across many tablespaces, with temp_tablespaces
pointing to a separate, less reliable device (single local NVMe drive).
How dangerous is it for the cluster to be unrecoverable after a crash?
If the drive goes down and the database can't read/write to
temp_t
Thanks Hayato and Shlok. The Citus extension package is installed, but it is
not preloaded in shared_preload_libraries and the citus extension is not created. I
will create a new container without the Citus extension package and add a stack
trace (I think this is the one you're talking about) as soon as
On Thu, Jul 10, 2025 at 12:26 PM Adrian Klaver
wrote:
> On 7/10/25 04:48, Dominique Devienne wrote:
>
> > Seems so logical to me that these hashing functions would be available
> > as aggregates; I can't be the first one to think of that, can I?
> >
>
> I've been on this list since late 2002 and I
On 7/10/25 04:48, Dominique Devienne wrote:
Seems so logical to me that these hashing functions would be available
as aggregates; I can't be the first one to think of that, can I?
I've been on this list since late 2002 and I don't recall this ever
being brought up. Now it is entirely possible
Hi Laurenz,
Got it. I have only one suggestion for the patch. Consider adding a
corresponding test in src/bin/scripts/t/100_vacuumdb.pl.
Proposal (I used this to check the patch):
$node->safe_psql('postgres',
"CREATE TABLE parent_table (a INT) PARTITION BY LIST (a);\n"
. "CREATE TA
Steve Baldwin writes:
> I'm occasionally seeing a lock timeout in a commit statement. For example:
> 2025-07-10 08:56:07.225 UTC,"b2bc_api","b2bcreditonline",23592,"
> 10.124.230.241:60648",686f8022.5c28,55,"COMMIT",2025-07-10 08:56:02
> UTC,3984/10729,676737574,ERROR,55P03,"canceling statement d
Hi,
There could be many reasons; sharing the most common ones:
1. Typos, e.g. case-sensitivity mistakes
2. Password change errors
3. Incorrect username, database name, or port number
4. Configuration issues in the pg_hba.conf file
5. Privilege issues
6. Server not restarted/reloaded, or a full restart has not
Hi all,
I'm occasionally seeing a lock timeout in a commit statement. For example:
2025-07-10 08:56:07.225 UTC,"b2bc_api","b2bcreditonline",23592,"
10.124.230.241:60648",686f8022.5c28,55,"COMMIT",2025-07-10 08:56:02
UTC,3984/10729,676737574,ERROR,55P03,"canceling statement due to lock
timeout",,,
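For reference, the SQLSTATE 55P03 in that log line comes from the lock_timeout setting; a sketch of how it is typically enabled per session (the value is illustrative):

```sql
-- Any statement (including COMMIT) that waits longer than this
-- for a lock fails with SQLSTATE 55P03 ("lock_not_available")
SET lock_timeout = '5s';
```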
We store scientific information in PostgreSQL, and some of that is
bytea and large, thus we must "chunk it" both for performance, and not
be limited to 1GB (we do exceed that, in rare occasions).
Recently I added md5/sha1 hashing support for such values (for various
reasons, to track corruptions i
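Assuming a hypothetical chunk table `chunks(blob_id, seq, data)`, the whole-value digest can be computed server-side by reassembling the chunks in order. The caveat is that string_agg materializes the full value, so it hits the same 1 GB limit the chunking was meant to avoid; a true streaming hash aggregate is what this thread is asking for:

```sql
-- chunks(blob_id, seq, data bytea) is a hypothetical schema
SELECT blob_id,
       md5(string_agg(data, ''::bytea ORDER BY seq)) AS whole_value_md5
FROM chunks
GROUP BY blob_id;
```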