>
> if we have any column with large string/text values and we want it to be
> indexed then there is no choice but to go for a hash index. Please correct
> me if I'm wrong.
>
There are other strategies / solutions, but we would need to learn more
about your use case.
Cheers,
Greg
On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne
wrote:
> composite key (placedon,id)
> In concurrent mode, if I use id in the WHERE clause, the query plan for
> that id column changes.
>
> How can I mitigate this, rather than creating a separate index on id, so
> the query plan (an index scan) does not change?
Adrian and Veem were saying:
> > so does it mean that we should increase the pg_stat_statement.max to
> further higher value?
>
Yes, you should raise this setting if you find your queries are getting
pushed out. Moving to version 17 will also help, as I and others have
been working on norma
On Sat, Oct 5, 2024 at 5:03 PM Tom Lane wrote:
> As I mentioned upthread, we currently promise that xact_start matches the
> query_start of the transaction's first statement. (I'm not sure
> how well that's documented, but the code goes out of its way to make it
> so, so somebody thought it was
While working on a doc patch for this, I realized that the situation is
worse than I originally thought. This means that anyone relying on
pg_stat_activity.xact_start is not really seeing the time of the snapshot.
They are seeing the time that BEGIN was issued. Further, there is no way to
tell (AFA
To be frank, there is so much wrong with this query that it is hard to know
where to start. But a few top items:
* Make sure all of the tables involved have been analyzed. You might want
to bump default_statistics_target up and see if that helps.
* As mentioned already, increase work_mem, as you
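As a sketch of those two knobs (the table, column, and values here are hypothetical):

```sql
-- Raise the statistics target for one skewed column, then re-analyze:
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;
ANALYZE orders;
-- Experiment with a larger work_mem for this session only:
SET work_mem = '256MB';
```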
>
> Additionally in the plan which mysql makes and showing the highest
> response time, is it suffering because of differences of the speed of the
> underlying IO/storage or is it just because of the optimization features
> which are available in postgres and not there in mysql ? Trying to
> unders
Start by seeing where the port is actually being set by running this:
select setting, source, sourcefile, sourceline from pg_settings where name
= 'port';
Cheers,
Greg
I'd echo the suggestion to strace this. You can use the pre_auth_delay
setting to help facilitate that. See:
https://www.postgresql.org/docs/current/runtime-config-developer.html
Cheers,
Greg
On Fri, Nov 1, 2024 at 2:40 AM Muhammad Usman Khan
wrote:
> For immediate space, move older files from pg_wal to another storage but
> don't delete them.
>
No, do not do this! Figure out why WAL is not getting removed by Postgres
and let it do its job once fixed. Please recall the original poste
Some of those requirements are vague, but yes, Patroni should probably be
the first approach you look at. If the second datacenter is just for
redundancy, then a simple setup would be:
DCA (data center A):
Postgres server 1
Postgres server 2
DCB:
Postgres server 3 (set no_failover: true)
You wil
>
> Yes. Looks like the DBD::Pg module is what needs the upgrade.
>
Specifically, you will need DBD::Pg version 3.9.0 or higher, but I *highly*
recommend using the latest release you can (right now, that is 3.18.0).
Cheers,
Greg
What problem are you trying to solve? If you tell us that, we can guide you
to some better solutions.
There are numerous issues here, but the most important are:
1) Calling a shell via \! invokes an entirely new process: there is no link
to the parent or grandparent process
2) The run-bash-funct
On Mon, Sep 23, 2024 at 8:22 AM Asad Ali wrote:
> There is no direct prompt escape sequence like %n for displaying the
> current_role in the psql prompt. However, you can work around this by using
> a \set command to define a custom prompt that includes the result of
> current_role.
>
Please do
You may be able to solve this with advisory locks. In particular,
transaction-level advisory locks with the "try-pass/fail" variant. Here,
"123" is a unique number used by your app, related to this particular
table. You also need to force read committed mode, as the advisory locks go
away after the
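A minimal sketch of that pattern, with 123 standing in for your app's key:

```sql
BEGIN;
-- Returns true if the lock was acquired, false immediately if another
-- transaction already holds it; do the protected work only on true.
SELECT pg_try_advisory_xact_lock(123);
-- ...protected work here...
COMMIT;  -- transaction-level advisory locks are released automatically
```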
On Wed, Sep 25, 2024 at 2:13 AM KK CHN wrote:
> PgBackRest : I tried to restore the latest backup taken at my RepoServer
> to a testing EPAS server freshly deployed .
> ...
>
Now I comment out the archive command in the test EPAS server
> postgresql.conf and started the EPAS server.
>
* To d
On Wed, Sep 25, 2024 at 4:54 PM Christophe Pettus wrote:
> On Sep 25, 2024, at 13:49, Greg Sabino Mullane wrote:
> > BEGIN ISOLATION MODE REPEATABLE READ SNAPSHOT NOW;
>
> This might well be a failure of imagination on my part, but when would it
> pragmatically matter th
>
> Since transactions should be "as short as possible, without being too
>> short", how much time is there between when you run "BEGIN;" and the first
>> "work statement"?
>>
>
I don't know that it really matters. For something automated, it would be a
few milliseconds. Either way, I'm sure most p
On Wed, Sep 25, 2024 at 1:53 PM Tom Lane wrote:
> Because we're not going to analyze the statement in the amount of depth
> needed to make that distinction before we crank up the
> transactional machinery. If it says SELECT, it gets a snapshot.
>
Ok, thanks. So to the original poster's point, p
Please do not spam the mailing lists with requests for follow ups. In this
particular case, you received an answer two days after you posted it. It
went to psql-general, pgsql-in-general, and to you directly, so there
seems little excuse for missing it.
Also note that pglogical is a third-par
On Tue, Oct 1, 2024 at 1:57 AM sud wrote:
> *Where are you getting the ~2000 count from?*
> Seeing this in the "performance insights" dashboard and also its matching
> when I query the count of sessions from pg_stat_activity.
>
So I'm guessing this is perhaps RDS or Aurora? Stating that up fron
On Tue, Oct 1, 2024 at 9:53 AM Pecsök Ján wrote:
> We see significant difference in explain analyze Actual time in the first
> line of execution plan and Execution time in the last line of execution
> plan. What can be the reason?
>
>
>
> For example, first line of execution plan:
>
> Gather (co
On Tue, Oct 1, 2024 at 2:52 AM yudhi s wrote:
> When I execute the query with explain (analyze, buffers),I see the section
> below in the plan having "sort method" information in three places
> each showing ~75MB size, which if combined is coming <250MB. So , does that
> mean it's enough to set t
On Thu, Sep 19, 2024 at 5:17 AM veem v wrote:
> 2024-09-18 17:05:56 UTC:100.72.10.66(54582):USER1@TRANDB:[14537]:DETAIL:
> Process 14537 waits for ShareLock on transaction 220975629; blocked by
> process 14548.
>
You need to find out exactly what commands, and in what order, all these
processes
On Tue, Sep 24, 2024 at 10:28 AM Tom Lane wrote:
> It's even looser than that, really: it's the first statement that
> requires an MVCC snapshot.
Hm, so why does "SELECT 1;" work as a transaction start marker then, as
opposed to "SHOW work_mem;", which does not? Do we simply consider anything
On Sat, Sep 21, 2024 at 3:08 PM Ramakrishna m wrote:
> I would greatly appreciate any suggestions you may have to help avoid
> logical replication delays, whether through tuning database or operating
> system parameters, or any other recommendations
>
In addition to the things already answered:
As far as the application being able to change those fields itself, you can
prevent that via column permissions, by leaving out the four audit columns
and doing something like:
GRANT INSERT (email, widget_count), UPDATE (email, widget_count) ON TABLE
foobar TO PUBLIC;
That way, inserts are guaran
On Sat, Nov 23, 2024 at 1:10 PM Bruce Momjian wrote:
> and say bounce the database server and install the binaries. What I
> have never considered before, and I should have, is the complexity of
> doing this for many remote servers. Can we improve our guidance for
> these cases?
>
Hmm I'm not
On Mon, Nov 11, 2024 at 6:23 AM David Lynam
wrote:
> Are there any plans or discussions about adding native support for
> SQL:2011 temporal tables, so we don’t need extensions?
No concrete plans I've heard of (but see below). For the record, "so we
don't need extensions" is not a winning argume
On Wed, Dec 4, 2024 at 7:42 AM Bharani SV-forum
wrote:
> a) is the above said steps is correct with the given existing and proposed
> setup
>
No. Here are some steps:
* Install Postgres on the new VM
However you get it, use the newest version you can. As of this writing, it
is Postgres 17.2. Ve
On Mon, Dec 9, 2024 at 6:43 AM Joe Wildish wrote:
Overall, your solution seems okay, but:
> a fix has gone in to PG17 that sorts this problem.
>
> However, we can't go to 17 yet, so need a solution for 15 and 16.
Honestly, this would seem like a really, really strong reason to push for
v17.
On Tue, Feb 4, 2025 at 1:50 PM Ayush Vatsa
wrote:
> Also, what would be the best way to ensure that, by default, no roles
> (except the function owner) have any privileges on new functions created in
> my protected schema?
>
Create them in another schema altogether, then move them into my_schema once
I'm not sure why we are focused on the other errors - the database fails to
get dropped (or created), so future errors are to be expected.
pg_restore should be run with the --exit-on-error flag, and handle the
errors one by one as someone mentioned upthread.
I would use the --section=pre-data --s
On Wed, Dec 11, 2024 at 1:44 PM Ron Johnson wrote:
> Isn't this a pretty common password setting? I know that for at least 35
> years, and going back to the VAX/VMS days I've been locked out for X hours
> if I typed an invalid password. Same on Windows and I think also Linux
> (though ssh publ
On Wed, Dec 11, 2024 at 5:46 AM 張宸瑋 wrote:
> In the use of the Credcheck suite, the parameter
> "credcheck.max_auth_failure = '3'" is set in the postgresql.conf file to
> limit users from entering incorrect passwords more than three times, after
> which their account will be locked.
>
Won't that
On Mon, Dec 16, 2024 at 5:32 AM 張宸瑋 wrote:
> We have both regular accounts and system accounts. For regular accounts,
> we still require password complexity and the lockout functionality after
> multiple failed login attempts.
>
Again, what is the threat model here? Most people have their passwo
As a superuser, rename pg_stat_reset inside one of the commonly affected
databases:
alter function pg_stat_reset rename to
hey_stop_running_pg_stat_reset_already;
Then see who starts complaining. Additionally, your server log will get
helpful entries like this:
ERROR: function pg_stat_reset() d
On Wed, Jan 22, 2025 at 2:00 AM Runxi Yu wrote:
> I therefore propose a feature, to be able to specify in a table schema
> that a row should be deleted if orphaned.
>
I think you mean "childless" rows, as "orphaned" has a different meaning
traditionally.
When and how would this deletion take pl
You might also look into using a bitmap, for some or all of those fields.
It depends on how many distinct values each can have, of course, and also
on how exactly they are accessed, but bitmaps can save you quite a bit of
space.
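As a sketch of the packing idea (table name and flag layout are hypothetical):

```sql
CREATE TABLE widget_flags (
  id    int PRIMARY KEY,
  flags smallint NOT NULL DEFAULT 0  -- bit 0 = active, bit 1 = archived
);
-- Set the "archived" bit:
UPDATE widget_flags SET flags = flags | (1 << 1) WHERE id = 42;
-- Find rows with the "active" bit set:
SELECT id FROM widget_flags WHERE (flags & 1) <> 0;
```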
Cheers,
Greg
On Fri, Feb 14, 2025 at 1:02 AM Michał Kłeczek wrote:
> Create index concurrently and then fiddle with the catalog tables to
> define the constraint using this index?
>
You mean an ALTER TABLE ... ADD CONSTRAINT ... EXCLUDE without actually
doing an ALTER TABLE. Nope, that's far worse than the p
On Wed, Feb 12, 2025 at 9:02 PM Marcelo Fernandes
wrote:
> What I am after is the same, but I seek a deeper understanding of what it
> does, and why it does it. For example, it swaps relfilenode. Why?
It is surgically replacing all pointers to the old data with pointers to
the new data. Yes, wi
On Sun, Feb 16, 2025 at 5:58 PM Marcelo Fernandes
wrote:
> - The foreign keys are not being updated to point to the new table.
>
You started out okay with your test script, but the pg_depend bit needs
work. I would recommend examining that table closely until you have a
really good understanding
On Wed, Mar 19, 2025 at 10:02 AM E-BLOKOS wrote:
> is it possible a crash happened with a VACUUM and a machine reboot in same
> time?
>
More likely to be a problem with pg_repack. Please tell us the exact
versions of pg_repack and Postgres in use here.
Cheers,
Greg
--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support
First figure out which database is having that issue, by using pg_dump
--schema-only on each database in turn. Then run this SQL on the database
giving the error to see if the type exists, or what is nearby:
select oid, typname, typtype, typnamespace::regnamespace from pg_type where
oid <= 794978
On Thu, Mar 13, 2025 at 11:49 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:
> To return one row takes 43ms is not optimal
>
It's actually much faster than that, but even 43ms is overall good. The
query is already pretty optimal, as it uses a single index only scan. There
are a few tric
On Mon, Apr 7, 2025 at 5:32 AM KK CHN wrote:
> *ERROR: [082]: WAL segment 000101EB00*4B was not archived
> before the 6ms timeout
>
This is the part you need to focus on. Look at your Postgres logs and find
out why the archiver is failing. You can also test this without trying a
On Tue, Apr 15, 2025 at 7:20 AM Karsten Hilbert
wrote:
> > The survey [...] is completely anonymous.
>
> Nope.
>
Can you elaborate on this, please, for my sake and others?
On Wed, Apr 30, 2025 at 7:15 AM Luca Ferrari wrote:
> Rationale: I'm using ansible to configure, thru different steps,
> PostgreSQL instances and I would like to have every part to append its
> configuration on the previous one.
>
Ansible is good for bringing your systems to a known consistent s
On Thu, May 1, 2025 at 9:08 AM Abdul Sayeed wrote:
> Hello,
>
> This warning message indicates that a *resumable backup*,
>
(plus lots more ChatGPT crap)
Please do not use LLMs to answer questions here. If the original poster
wanted that, they could have done it themselves. Further, you are poll
On Fri, Apr 18, 2025 at 5:18 PM Jeremy Schneider
wrote:
> but i wanted to confirm with someone - my easiest options for online
> verifying checksums would be to trigger pg_basebackup then check
> pg_stat_database, or to install michael's utility?
>
Your easiest option is to just use pgbackrest,
On Mon, Apr 21, 2025 at 9:03 AM KK CHN wrote:
>
> ERROR: [082]: WAL segment 000102200038 was not archived before
> the 6ms timeout
>
...
> How can I make the full backup command not check whether the WAL was
> archived to the repo server at least once?
>
You cannot. WAL
On Wed, May 7, 2025 at 7:15 AM KK CHN wrote:
> *archive_command = 'pgbackrest --stanza=My_Repo archive-push %p && cp %p
> /data/archive/%f' *
>
Don't do this. You are archiving twice, and worse, the first part is using
async archiving. Remove that whole "cp" part. Once that is fixed, run:
pgbac
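In other words, the archive_command should contain only the pgbackrest call:

```
archive_command = 'pgbackrest --stanza=My_Repo archive-push %p'
```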
On Mon, Feb 17, 2025 at 2:55 AM 馬 騰飛 wrote:
> Interestingly, when we isolate the problematic SQL statement and replace
> its parameters with actual values, it executes in just a few seconds in
> pgAdmin.
> However, when we run the same SQL query through our application using
> Npgsql, it takes ov
Another variation:
select avg(max) from (select distinct max(val) over(partition by id) from
mytable) as x;
Cheers,
Greg
Not related to the main question, but that query could use a little
adjustment. Something like:
WITH x AS (
select kelt from javaink_forgalma
where en_kaptam is true and az_aru_neve = 'nyugdíjam'
order by kelt desc limit 2
)
,y AS (select min(kelt) from x)
,z AS (select max(kelt) from x)
INS
Take a look at the pg_stat_archiver view, if you have not already:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ARCHIVER-VIEW
So I want to check how many wal file got created in x minutes and how many
> .ready files got created in those x minutes.
>
It's not c
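A quick look at the view (all of these columns are in the documentation linked above):

```sql
SELECT archived_count, last_archived_wal, last_archived_time,
       failed_count, last_failed_wal, last_failed_time, stats_reset
FROM pg_stat_archiver;
```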
On Fri, Mar 7, 2025 at 9:35 AM mark bradley wrote:
> This is what MS Copilot has to say about this apparent bug where Postgres
> inserts extra rows violating a primary keys uniqueness constraint:
>
> Yes, this issue has been encountered by others. There are a few potential
> reasons why this migh
CVE-2025-1094 has a narrow blast radius. If you are not directly affected,
I would focus your efforts on getting to 17. But the lack of an existing
process to smoothly upgrade minor revisions is worrying and something that
needs to get addressed as well.
Cheers,
Greg
On Thu, Mar 6, 2025 at 6:49 AM chandan Kumar
wrote:
> need any correction or advise.
>
Honestly, this all seems overly complex and fragile. I'm not sure what the
overall goal is, but if it's to have a general PITR solution, use
pgBackRest. If it's just to have a fall back method for a particular
On Wed, Mar 5, 2025 at 9:44 PM me nefcanto wrote:
> Anyway, that's why I asked you guys. However, encouraging me to go back to
> monolith without giving solutions on how to scale, is not helping.
>
We did. In addition to the ongoing FDW discussion, I mentioned read-only
replicas and Citus. As fa
A reindex is not going to remove rows from the table, so we need to see how
you came to the conclusion that it did.
Cheers,
Greg
On Tue, Mar 11, 2025 at 10:29 AM mark bradley
wrote:
> An "interesting" effect of reindexing is that all the records that were
> dups in the nodes table were deleted, both copies.
>
Er...that's not just interesting, but alarming - if true. Can you show the
steps you took?
Cheers,
Greg
On Tue, Mar 11, 2025 at 3:06 AM Achilleas Mantzios asked:
> is it still harder than the trigger ?
>
I think the trigger wins: no extension needed, arguably better error
output, easier to understand at a glance, and can quickly change the
business logic by adjusting the function. Pretty short too.
On Thu, Feb 27, 2025 at 1:32 PM Tom Lane wrote:
> > -c 'ALTER USER timeshift_user PASSWORD '"'"'timeshift_pass'"'"';'
> > I am still trying to work out what that quoting is doing?
>
That's standard for -x output for some versions of bash. FWIW, none of the
shells I had access to output it quite
That's harmless, it is the walsender process, and it is meant to be
long-running. You can modify your query and add this:
AND backend_type = 'client backend'
to filter out any background processes.
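For example (the column list here is illustrative):

```sql
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```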
Cheers,
Greg
FYI the -bugs thread in question:
https://www.postgresql.org/message-id/flat/CAEHBEOBCweDWGNHDaUk4%3D10HG0QXXJJAGXbEnFLMB30M%2BQw%2Bdg%40mail.gmail.com
seems to imply the primary blocker was a unique constraint.
Cheers,
Greg
On Tue, Mar 11, 2025 at 9:48 PM Siraj G wrote:
> What are the features available in Postgresql to hide PII (personal
> identifiable information) from the Admin team?
>
Can you explain your threat model here, and who exactly the "Admin team" is
and what access they have? As a general rule of thum
On Wed, Mar 5, 2025 at 10:22 AM Marc Millas wrote:
> Then the flow contains a single line: \. to my understanding this means
> end of the copy
>
> but, Postgres generates an error : invalid input syntax for type numeric
> "\."
>
This can happen when you send TWO backslashes and a dot, rather tha
On Wed, Mar 5, 2025 at 7:15 AM me nefcanto wrote:
> I think if we put all databases into one database, then we have blocked
> our growth in the future.
>
I think this is premature optimization. Your products table has 100,000
rows. That's very tiny for the year 2025. Try putting everything on on
On Tue, Mar 11, 2025 at 2:35 PM Paul Foerster
wrote:
> The question was a bit of an idea. So the glibc version in not known yet,
> but I'm highly confident that they will differ. A reindex could in theory
> be possible in most cases, but is a definite show stopper on some of our
> databases, beca
On Thu, Feb 13, 2025 at 6:06 PM Marcelo Fernandes
wrote:
> > It's technically possible to do something similar for your use case, but
> it's not trivial. All the cab to trailer wires must be precisely changed.
> Everything directly related to the data must be swapped: heap, indexes,
> toast.
>
>
On Fri, Feb 14, 2025 at 12:41 AM Laurenz Albe
wrote:
> Moreover, you have to make sure to send out invalidation messages so that
> every session that caches statistics or
> execution plans for the tables discards them.
Hmm...is that really necessary? Because if so, there is no direct SQL-level
On Sat, Feb 22, 2025 at 8:58 PM Tobias McNulty
wrote:
> "Without this option a refresh which affects a lot of rows will tend to
> use fewer resources"
...
> either that (1) the refresh operation actually updates the contents of a
> lot of rows in the materialized view
This is the correct inte
On Mon, Feb 24, 2025 at 4:46 AM Dominique Devienne
wrote:
> But now we have a new requirement, for "fuzzy find". I.e. the client can
> ask for names
> which are not the exact in-DB names, but also aliases of those names.
>
...
> join unnest($3::text[]) with ordinality as aliases(name, ord) on c.
On Mon, Feb 24, 2025 at 11:50 AM Dominique Devienne
wrote:
> We lookup whether there's a list of aliases for "Allison". If there are,
> we send them in $3 as an array
> of strings (e.g. ['All', 'Alli', ...]), and the first one matching (thanks to
> order by ord limit 1) is returned, if any.
>
Thank
Since you are willing to break the all one transaction rule, and if the
restores were created via pg_dump, you could use the --section argument to
split things up, run the "pre-data" sections serially, and the rest ("data"
and "post-data") concurrently.
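Roughly like this, assuming a custom-format dump named db.dump and a target
database mydb (both names hypothetical):

```shell
pg_restore --section=pre-data -d mydb db.dump
pg_restore --section=data --section=post-data -j 4 -d mydb db.dump
```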
--
Cheers,
Greg
On Wed, Feb 26, 2025 at 2:21 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> issue in PostgreSQL 15.7
>
Still missing a ton of bug fixes - Postgres 15 is on version 15.12. Try to
get that upgraded.
> We got the following error
> ERROR: SSL error: bad length
> SSL SYSCALL error: No
On Tue, Feb 25, 2025 at 7:03 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> 1. Why is corruption happening? How to find out the reason?
>
It sounds more like user error than actual corruption, but without more
data we cannot determine. Show us the exact commands you ran, along wit
On Tue, Feb 25, 2025 at 8:51 AM wrote:
> Does this sound familiar anyone? Any ideas what might prompt Postgres to
> stop?
>
Postgres needs full read and write access to its own data directory. If it
loses that access, it will eventually PANIC, as your logs show.
See if you can figure out what else happen
Kudos to Alicja for that excellent answer.
It would be nice if the Postgres log message was a little more descriptive.
(throws it onto my huge pile of "future maybe enhancement ideas")
Cheers,
Greg
On Mon, Jun 23, 2025 at 2:45 PM raphi wrote:
> As of now though we cannot use PG for any PCI/DSS certified application
> because we can't enforce either complexity nor regular password changes,
>
You can, and many, many companies do, but you need a modern auth system
like Kerberos. Even if we we
Basic debugging:
Try looking at each node more closely with pcp_node_info
https://www.pgpool.net/docs/latest/en/html/pcp-node-info.html
Also make sure you are always using health_check
https://www.pgpool.net/docs/latest/en/html/runtime-config-health-check.html
Specific debugging:
2025-06-19 1
Big -1. This is an application problem. Make the application smarter, not
the parser dumber. Prepared statements have been around a long, long time.
Cheers,
Greg
On Wed, Jul 9, 2025 at 9:57 AM Alpaslan AKDAĞ
wrote:
> Is it expected behavior that users created with scram-sha-256 passwords
> can still connect via md5 in pg_hba.conf?
Yes. From the docs:
> To ease transition from the md5 method to the newer SCRAM method, if md5 is
> specified as a method i
> Best solution: Upgrade everyone to scram, then change md5 to scram in
> pg_hba.conf and never look back.
>
To expand more on "upgrade everyone to scram": that means forcing all
users to set a new password while scram is in use (which should be the
default). You can do it yourself by getting a list
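One way to get that list (roles whose stored password is still an md5 hash):

```sql
SELECT rolname FROM pg_authid WHERE rolpassword LIKE 'md5%';
```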
Looks like someone testing out the fake Postgres CVE 2019-9193
https://nvd.nist.gov/vuln/detail/CVE-2019-9193
See for example:
https://packetstorm.news/files/id/166540
But certainly the first step is finding out who or what is running this.
Cheers,
Greg
You might want to examine the SKIP LOCKED feature as well, if you are using
this query to have multiple workers grab chunks of the table to work on
concurrently.
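A sketch, using a hypothetical jobs table:

```sql
-- Each worker claims a batch without blocking on rows another
-- worker has already locked:
SELECT id FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 10
FOR UPDATE SKIP LOCKED;
```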
Cheers,
Greg
On Wed, Jul 16, 2025 at 9:25 AM Amol Inamdar wrote:
>
> 1. NFS mount point is /nfs-mount/postgres (with permissions locked
> down so that Postgres cannot create directories in it)
> 2. Postgres data directory is /nfs-mount/postgres/db
> 3.
>
> With secured NFS + AT-TLS setup P
>
> It would be really handy to know that `$1` is being used as a non nullable
> value
To push back in a different way, how exactly would this be useful?
The system catalogs are the best place to get all the various information
about a relation, including any and all constraints. Data type info
Make sure max_prepared_statements is set to nonzero in your config. See:
https://www.crunchydata.com/blog/prepared-statements-in-transaction-mode-for-pgbouncer
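In pgbouncer.ini, that looks something like this (the value is illustrative):

```ini
[pgbouncer]
pool_mode = transaction
max_prepared_statements = 200
```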
Cheers,
Greg
>
> Is this architecture considered a best practice within the PostgreSQL
> community?
I would say Patroni is probably "best practice", but there are other people
out there happy with, and expert with, repmgr as well.
> Are there any potential bottlenecks or failure points I should be aware of?