Issue with COMMITs stuck on "AccessExclusiveLock on object 0 of class 1262 of database 0"

2019-03-29 Thread Martijn van Oosterhout
2019-03-27 01:40:43 UTC [33923-3] [th.ei] app_ei@app_ei STATEMENT:  COMMIT
2019-03-27 01:40:43 UTC [33900-1] [th.t6] app_t6@app_t6 LOG:  process 33900
still waiting for AccessExclusiveLock on object 0 of class 1262 of database
0 after 1000.080 ms
2019-03-27 01:40:43 UTC [33900-2] [th.t6] app_t6@app_t6 DETAIL:  Process
holding the lock: 33162. Wait queue: 33226, 32565, 32443, 32452, 33923,
33900, 31701, 33827, 33844, 33427, 33359, 34012, 34005, 32054, 26820,
32788, 33587, 34062, 32282, 34127, 34129, 34080, 33161.
2019-03-27 01:40:43 UTC [33900-3] [th.t6] app_t6@app_t6 STATEMENT:  COMMIT
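
For reference, "class 1262" is the OID of pg_database, so this is a
cluster-wide "object" lock rather than a lock on any table or row.  A
query along these lines (a diagnostic sketch added here, not part of the
original report) shows which backends hold or are waiting for it:

  SELECT l.pid, l.mode, l.granted, a.datname, a.state, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
  WHERE l.locktype = 'object'
    AND l.classid = 1262   -- pg_database: the shared NOTIFY-queue lock
    AND l.objid = 0
  ORDER BY l.granted DESC, l.pid;
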
-- 
Martijn van Oosterhout  http://svana.org/kleptog/


Re: unexpected lock waits (was Re: [GENERAL] Do not understand why this happens)

2019-06-09 Thread Martijn van Oosterhout
(sorry, very old thread)

On Fri, Mar 15, 2013 at 09:38:06AM -0400, Tom Lane wrote:
> Bill Moran  writes:
> > I do wonder what else is happening in the transaction that you're
> > calling NOTIFY within; and that some other statement could be causing
> > the lock wait.
> 
> FWIW, the lock seems to be the one taken to serialize insertions into
> the shared NOTIFY queue, from this bit in commands/async.c:

[SNIP]

> This lock is held while inserting the transaction's notify message(s),
> after which the transaction commits and releases the lock.  There's not
> very much code in that window.  So what we can conclude is that some
> other transaction also doing NOTIFY hung up within that sequence for
> something in excess of 3 seconds.  We have been shown no data whatsoever
> that would allow us to speculate about what's causing that other
> transaction to take so long to get through its commit sequence.
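
To make that window concrete, here is a minimal two-session sketch (an
illustration, not code from the thread; the channel and payloads are
made up): each transaction queues a notification, and the second COMMIT
can only proceed once the first has finished appending its messages to
the shared queue and committed.

  -- Session 1
  BEGIN;
  NOTIFY app_events, 'first payload';
  COMMIT;  -- the global notify-queue lock is taken during commit
           -- processing, while the message is appended to the queue

  -- Session 2, running concurrently
  BEGIN;
  NOTIFY app_events, 'second payload';
  COMMIT;  -- if session 1 is still inside its commit sequence, this
           -- waits on "object 0 of class 1262 of database 0"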

I just want to add that, after running into this very same issue (see
[1]), in our case the above conclusion is incorrect.  It is not the
NOTIFYing transactions that are holding the lock too long, but the
LISTENing backends.  In our case this is because we have lots of
databases, and they all share a single global NOTIFY queue.
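
One side effect of the queue being cluster-wide is that its fill level
can be checked from any database with pg_notification_queue_usage().  A
quick check (again just an illustration) looks like:

  -- The NOTIFY queue is a single shared structure, so the result does
  -- not depend on which database the session is connected to.
  SELECT pg_notification_queue_usage();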

To verify that the LISTENing backends were the problem, I made some
small patches that significantly reduce the time they hold the lock, and
they reduce the problem considerably for us; see [2].  A slow commit
does have some impact, but the bulk of the time is spent elsewhere.

[1]: 
https://www.postgresql.org/message-id/cadwg95t0j9zf0uwdcmh81kmndsitavhxmbvgyqrrjcd-ilw...@mail.gmail.com

[2]: 
https://www.postgresql.org/message-id/cadwg95ulhar1uq6pqloy1mtqyen23c1dvor2tvjcxubz1ge...@mail.gmail.com

Hope this helps.
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> The combine: one man, one day, wheat for half a million loaves of bread.