Explain analyse with track_io_timing

2022-03-09 Thread Jayadevan M
Hello team,

What is the unit of the I/O Timings values in EXPLAIN (ANALYZE, BUFFERS)? There is a
plan with quite a few nodes. In each case, the value of I/O Timings is
much higher than the time for the outer node. A few lines from the plan -

 Hash Left Join  (cost=14320945.22..7099974624.27 rows=194335062701 width=5511) (actual time=107913.021..108109.313 rows=759 loops=1)
   Buffers: shared hit=738871 read=1549646, temp read=92710 written=92973
   I/O Timings: read=228324.357
   ->  Hash Left Join  (cost=14049069.69..246411189.41 rows=18342148438 width=5467) (actual time=96579.630..96774.534 rows=759 loops=1)
         Buffers: shared hit=684314 read=1377851, temp read=92710 written=92973
         I/O Timings: read=217899.233
At the end, there is:
Execution Time: 108117.006 ms

So the query takes about 108 seconds in total, but the I/O Timings reported are higher than that.
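
For context, a minimal sketch of how output like the above is captured; it
assumes a session with privileges to change track_io_timing (a
superuser-level setting) and uses a hypothetical table name:

-- track_io_timing is off by default because timing the block reads and
-- writes adds overhead on some platforms.
SET track_io_timing = on;

-- With BUFFERS, the plan shows block counts; with track_io_timing on,
-- it also shows the I/O Timings lines.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM big_table;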

Best Regards,
Jay


Re: Explain analyse with track_io_timing

2022-03-09 Thread Jayadevan M
> Is it a parallel query? If yes, the total time is only the time spent in
> the main process, and the I/O time is the sum of all I/O time spent in the
> main process and the parallel workers, which can obviously be a lot more
> than the total execution time.
>
Yes, there are parallel workers; that explains it. Thank you.
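
For anyone who finds this thread later: the I/O Timings values, like the
actual time figures, are in milliseconds, and the per-worker breakdown can
be inspected directly. A sketch, assuming a hypothetical big_table large
enough to get a parallel plan (per-worker buffer and I/O detail in EXPLAIN
needs a reasonably recent server, I believe PostgreSQL 13 or later):

-- VERBOSE adds "Worker N: actual time=..." lines under parallel nodes,
-- so time and I/O can be attributed to each worker separately.
SET track_io_timing = on;
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT count(*) FROM big_table;
-- A node's I/O Timings sum the leader and all of its workers, which is
-- why they can exceed the overall Execution Time.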
Regards,
Jay


Re: High concurrency same row (inventory)

2019-07-29 Thread Jayadevan M
On Mon, Jul 29, 2019 at 11:46 AM Jean Baro wrote:

> Hello there.
>
> I am not a PG expert; currently I work as an Enterprise Architect (who
> believes in OSS and in particular PostgreSQL 😍). So please forgive me if
> this question is too simple. 🙏
>
> Here it goes:
>
> We have a new Inventory system running on its own database (PG 10, AWS
> RDS m5.2xlarge, 1TB SSD EBS, Multi-AZ). The effective DB size is less
> than 10GB at the moment. We provisioned 1TB to get more IOPS from EBS.
>
> As we don't have a lot of different products in our catalogue, it's quite
> common (especially when a particular product is on sale) to have a high
> rate of concurrent updates against the same row. There is also a frequent
> (every 30 minutes) batch update, coming from the warehouses (SAP), to all
> items whose current stock/inventory changed. We have just installed this
> system for a new tenant (one of the smallest ones) and although it's
> running great so far, we believe this solution would not scale as we roll
> out this system to new (and bigger) tenants. Currently there are up to
> 1,500 transactions per second (mostly SELECTs and one particular UPDATE,
> which I believe is the one being aborted/deadlocked at times) in this
> inventory database.
>
Monitoring the locks and activity, as described here, may help:
https://wiki.postgresql.org/wiki/Lock_Monitoring
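
For example, a condensed variant of the blocking-locks queries on that page
(pg_blocking_pids() needs PostgreSQL 9.6 or later) lists each blocked
session, the PIDs blocking it, and the query it is trying to run:

-- Sessions currently waiting, who blocks them, and what they are running.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       wait_event,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;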

Regards,
Jayadevan