On 30/06/2021 12:44, damiano giuliani wrote:
Hi Guys,

Sorry for bothering you; unfortunately I was called in for an issue with a cluster I set up months ago, which was fully functional until last Saturday.

It looks like some applications lost their connection to the master, losing some updates/inserts.

I found the cause in the logs: the pgsqld monitor operation timed out after 10000ms, the master resource was demoted, the instance was stopped and then promoted to master again, causing a few seconds of outage (no master available during the described process).
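For reference, a rough sketch of checking and raising those monitor timeouts with pcs, assuming the PAF resource is named pgsqld (adjust to the actual resource id) and using pcs 0.9.x syntax; the 15s/16s intervals are PAF's documented defaults and 30s is only an example value:

  # current operation timeouts configured on the resource
  pcs resource show pgsqld

  # one-shot cluster status including fail counts
  crm_mon -1 -f

  # raise the monitor timeout above the observed ~10s, e.g. to 30s; PAF requires
  # distinct monitor intervals for the Master and Slave roles, so keep both
  pcs resource update pgsqld \
      op monitor interval=15s timeout=30s role=Master \
      op monitor interval=16s timeout=30s role=Slave

It is worth re-running pcs resource show afterwards to confirm the other operations (start/stop/promote/demote/notify) were kept as expected.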

I also noticed a recurring message:
Update score of "ltaoperdbsXX" from 990 to 1000 because of a change in the replication lag
Does this point to some kind of network lag?
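That message is PAF (pgsqlms) re-scoring a standby based on the replication lag it measures. To see whether there is real lag, something along these lines on the current primary could help (a sketch; assumes the postgres OS user can connect locally):

  # replication state and replay lag per standby, as reported by PostgreSQL 13
  sudo -u postgres psql -c "SELECT application_name, state, sync_state, replay_lag,
         pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_bytes
         FROM pg_stat_replication;"

  # promotion scores Pacemaker currently holds for each node
  crm_simulate -sL | grep -i pgsql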

The network should be 10 Gb/s, and both corosync and the production traffic run over it.
Network bonding is configured on all of the nodes.
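Since corosync shares that bond with production traffic, it may be worth confirming the bond and the totem ring were healthy at the time; a quick sketch (bond0 is a placeholder for the actual bond interface name):

  # bonding state and active slaves on each node
  cat /proc/net/bonding/bond0
  ethtool bond0 | grep -E 'Speed|Link detected'

  # local corosync ring status and cluster membership
  corosync-cfgtool -s
  corosync-cmapctl | grep members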
PAF version resource-agents-paf-2.3.0-1.rhel7.noarch
Postgres psql (13.1)
pacemaker-1.1.23-1.el7.x86_64
pcs-0.9.169-3.el7.centos.x86_64

I attached the logs, which could be useful to dig further.
Can someone point me in the right direction? It would be really appreciated.

thanks for the support
Pepe

In my case (CentOS Stream), PAF broke a working cluster after pcs (and other) package updates. On CentOS it should be easy to reproduce: stand up a new, fresh PostgreSQL cluster, make sure it works on its own, and then hand it to pcs/PAF to manage; as soon as that happens things "break" and pcs/PAF will show something like this (a rough reproduction sketch follows the output):
...
Failed Resource Actions:
  * pgsqld_monitor_15000 on c8kubernode3 'error' (1): call=892, status='Timed Out', exitreason='', last-rc-change='2021-07-09 15:05:28 +01:00', queued=0ms, exec=10002ms
  * pgsqld_promote_0 on c8kubernode1 'error' (1): call=896, status='complete', exitreason='c8kubernode3 is the best candidate to promote, aborting current promotion', last-rc-change='2021-07-09 15:05:10 +01:00', queued=0ms, exec=139ms
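Handing a fresh cluster to PAF usually amounts to something like the following (a sketch loosely following the PAF quick start, using pcs 0.10+ syntax; the PostgreSQL 13 paths and the pgsqld id are assumptions, adjust to the actual installation):

  # PAF resource on top of an existing streaming-replication setup
  pcs resource create pgsqld ocf:heartbeat:pgsqlms \
      bindir=/usr/bin pgdata=/var/lib/pgsql/13/data \
      op start timeout=60s op stop timeout=60s \
      op promote timeout=30s op demote timeout=120s \
      op monitor interval=15s timeout=10s role="Master" \
      op monitor interval=16s timeout=10s role="Slave" \
      op notify timeout=60s

  # make it a promotable clone
  pcs resource promotable pgsqld notify=true

  # after a failed monitor/promote, inspect the config and clear the failure history
  pcs resource config
  pcs resource cleanup pgsqld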

regards, L

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
