Hi
On Thu, 6 Dec 2018 at 12:18, Chris Withers wrote:
> On 06/12/2018 11:00, Alexey Bashtanov wrote:
> >> I'm loath to start hacking something up when I'd hope others have done
> >> a better job already...
> > If you log all queries that take more than a second to complete, is your
> > update the only one logged, or does something else (the would-be blocker)
> > get logged as well?
On 06/12/2018 11:00, Alexey Bashtanov wrote:
>> I'm loath to start hacking something up when I'd hope others have done
>> a better job already...
> If you log all queries that take more than a second to complete, is your
> update the only one logged, or does something else (the would-be blocker)
> get logged as well?

Is there any existing tooling that does this?

There must be some; google for queries involving pg_locks.
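A minimal sketch of such a query, assuming PostgreSQL 9.6 or later (where
pg_blocking_pids() is available):

    -- sessions currently waiting on a lock, and which pids hold them up
    SELECT pid,
           pg_blocking_pids(pid) AS blocked_by,
           state,
           query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;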
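And for the one-second logging Alexey suggests, the usual postgresql.conf
setting would be something like (value illustrative):

    # log any statement that takes longer than 1000 ms to complete
    log_min_duration_statement = 1000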
On 05/12/2018 15:47, Rene Romero Benavides wrote:
> Also read about HOT updates and the storage parameter named "fillfactor",
> so data blocks can be recycled instead of new ones being created when the
> updated fields don't also touch indexes.

I have read about these, but I'd prefer not to be making [...]
On 05/12/2018 15:40, Alexey Bashtanov wrote:
> One of the reasons could be that the row is already locked by another
> backend doing the same kind of update, or something different.
> Are these updates performed in longer transactions?

Nope, the transaction will just be updating one row at a time.
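One way to confirm or rule out lock waits as the cause (a sketch, not
something suggested in this thread so far) is to have PostgreSQL log them
directly; the 1s threshold shown is simply the default deadlock_timeout:

    # emit a log line whenever a session waits on a lock
    # for longer than deadlock_timeout
    log_lock_waits = on
    deadlock_timeout = 1s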
This parameter can be updated on a per-table basis.

On Wed, 5 Dec 2018 at 09:47, Rene Romero Benavides <rene.romer...@gmail.com> wrote:
> Also read about HOT updates and the storage parameter named "fillfactor",
> so data blocks can be recycled instead of new ones being created if the
> updated fields don't also touch indexes.
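For illustration, setting it on the table from this thread might look like
this (70 is an arbitrary example value, not a recommendation):

    -- leave ~30% of each heap page free so HOT updates can reuse it
    ALTER TABLE alerts_alert SET (fillfactor = 70);

Note that only newly written pages honour the new setting; existing pages
keep their layout until the table is rewritten (e.g. via VACUUM FULL or
CLUSTER).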
Also read about HOT updates and the storage parameter named "fillfactor",
so data blocks can be recycled instead of new ones being created if the
updated fields don't also touch indexes.

On Wed, 5 Dec 2018 at 09:39, Alexey Bashtanov wrote:
> >> The table has around 1.5M rows which have [...]
The table has around 1.5M rows, which have been updated/inserted around
121M times. The distribution of updates per row in alerts_alert will be
quite uneven, from a single insert up to one insert plus 0.5M updates.
Under high load (200-300 inserts/updates per second) we see occasional
(~10 per hour) [...]
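Given the fillfactor/HOT discussion above, one quick first check (a sketch
using the alerts_alert name from this thread) is how many of those 121M
updates were HOT, straight from the statistics views:

    -- a low hot_pct on a heavily updated table is a hint that
    -- fillfactor or the set of indexed columns is worth revisiting
    SELECT n_tup_upd,
           n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
    FROM pg_stat_user_tables
    WHERE relname = 'alerts_alert';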