On Fri, 2025-07-18 at 19:08 +0800, yexiu-glory wrote:
> I'm facing a problem here: our business requires logical data replication to
> other departments, but at the same time, sensitive fields need to be filtered
> out. Therefore, we used the column filtering function when creating logical
> replication.
On Fri, 2025-07-18 at 18:22 +0530, KK CHN wrote:
> I am getting an error when using PgBouncer (1.23.1) with Postgres 16 (Red Hat
> 9.4)
>
> 2025-07-18 00:00:00 IST ERROR: prepared statement "S_243" does not exist
> 2025-07-18 00:00:03 IST ERROR: prepared statement "S_205" does not exist
> 2025-07-18 00:00:03 IST ERROR: prepared statement "S_206" does not exist
Make sure max_prepared_statements is set to nonzero in your config. See:
https://www.crunchydata.com/blog/prepared-statements-in-transaction-mode-for-pgbouncer
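For reference, a minimal pgbouncer.ini sketch of the setting in question (the
value 200 is only an illustrative choice, not taken from this thread; the key
point is that 0 disables protocol-level prepared statement support in
transaction pooling mode):

    [pgbouncer]
    pool_mode = transaction
    ; 0 disables tracking of protocol-level prepared statements;
    ; any positive value lets PgBouncer map them onto server connections
    max_prepared_statements = 200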
Cheers,
Greg
--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support
> The interesting thing is, a few searches on performance mostly return
> negative impressions of their object storage compared to the original S3.
I think they had a rough start, but it's quite good now from what I've
experienced. It's also dirt-cheap, and they don't bill for
Thanks, I learned something else: I didn't know Hetzner offered S3-compatible
storage.
The interesting thing is, a few searches on performance mostly return negative
impressions of their object storage compared to the original S3.
Finding out what kind of performance your benchmark
Hi,
I am getting an error when using PgBouncer (1.23.1) with Postgres 16
(Red Hat 9.4)
2025-07-18 00:00:00 IST ERROR: prepared statement "S_243" does not exist
2025-07-18 00:00:03 IST ERROR: prepared statement "S_205" does not exist
2025-07-18 00:00:03 IST ERROR: prepared statement "S_206" does not exist
Now, I'm trying to understand how the CAP theorem applies here. Traditional
PostgreSQL replication has clear CAP trade-offs: you choose between
consistency and availability during partitions.
But when PostgreSQL instances share storage rather than replicate:
- Consistency seems maintained (same data
I'm facing a problem here: our business requires logical data replication to
other departments, but at the same time, sensitive fields need to be filtered
out. Therefore, we used the column filtering function when creating logical
replication. If we use `alter table table1 replica identity default`
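As an illustration of that kind of setup, a minimal sketch assuming a
hypothetical table1 where ssn is the sensitive column and (id, name) are the
columns to publish (names are made up for the example; column lists in
publications require PostgreSQL 15 or later):

    -- publish only the non-sensitive columns (hypothetical column names)
    CREATE PUBLICATION pub_filtered FOR TABLE table1 (id, name);

    -- replica identity determines which old-row columns are written to WAL
    -- for UPDATE/DELETE; DEFAULT means the primary key
    ALTER TABLE table1 REPLICA IDENTITY DEFAULT;

Note that if the publication publishes UPDATE or DELETE, the column list must
include all replica identity columns, which is often where column filtering
and replica identity settings collide.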
Hi Seref,
For the benchmarks, I used Hetzner's cloud service with the following setup:
- A Hetzner S3 bucket in the FSN1 region
- A virtual machine of type ccx63 (48 vCPU, 192 GB memory)
- 3 ZeroFS NBD devices (same S3 bucket)
- A ZFS striped pool with the 3 devices
- A 200 GB ZFS L2ARC
- Postgres con
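As a rough sketch (not from the original message) of how a pool like that
might be assembled, assuming the three ZeroFS-backed NBD devices appear as
/dev/nbd0..2 and a local NVMe partition is used for the L2ARC; the ZeroFS
setup itself is omitted:

    # striped pool over the three NBD devices (no redundancy)
    zpool create tank /dev/nbd0 /dev/nbd1 /dev/nbd2
    # attach a local partition as L2ARC cache (path and size are assumptions)
    zpool add tank cache /dev/nvme0n1p4
    # dataset for the Postgres data directory; 8k recordsize is a common
    # Postgres tuning, not necessarily what was used in these benchmarks
    zfs create -o recordsize=8k -o compression=lz4 tank/pgdata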
Sorry, this was meant to go to the whole group:
Very interesting! Great work. Can you clarify how exactly you're running
postgres in your tests? A specific AWS service? What's the test
infrastructure that sits above the file system?
On Thu, Jul 17, 2025 at 11:59 PM Pierre Barre wrote:
> Hi eve
Hi Laurenz,
> I think the biggest hurdle you will have to overcome is to
> convince notoriously paranoid DBAs that this tall stack
> provides reliable service, honors fsync() etc.
Indeed, but that doesn't have to be "sudden." I think we need to gain
confidence in the whole system gradually by st