Re: Explain plan shows fewer shared blocks when index+table compared to index alone?
Hello,

This block is from reading and checking the visibility map, I think.

We don't have to check the visibility map during a plain index scan: we still need to fetch the tuple from the table anyway, so we can check its visibility for the current transaction there.

With an index-only scan, we do need to check the visibility map: if the map shows the tuple's page as all-visible to all transactions, we return the tuple directly from the index. Otherwise, we read the tuple from the table just as in an index scan (this is your case, as indicated by "Heap Fetches: 1").

An index-only scan does not mean that we will never read the tuple from the table. It means that we can skip reading the table when the visibility map allows it for the given tuple.

regards, Sergei
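A minimal psql sketch of this behavior, using a hypothetical table and index (the names demo and demo_id_idx are illustrative only, not from the original question). Before VACUUM has set the visibility-map bits, the index-only scan reports heap fetches; afterward it does not:

    -- Hypothetical table and index, for illustration only.
    CREATE TABLE demo (id int, payload text);
    INSERT INTO demo SELECT g, 'x' FROM generate_series(1, 100000) g;
    CREATE INDEX demo_id_idx ON demo (id);

    -- Encourage the index-only plan for this demo.
    SET enable_seqscan = off;

    -- Freshly written pages are not yet marked all-visible, so the
    -- index-only scan must visit the heap to check tuple visibility:
    EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM demo WHERE id < 100;
    --   Index Only Scan ... Heap Fetches: 99

    -- VACUUM sets the visibility-map bits for all-visible pages.
    VACUUM demo;

    EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM demo WHERE id < 100;
    --   Index Only Scan ... Heap Fetches: 0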
Why are commits consuming most of the database time?
The test data below is from a non-virtualized (client system and database server) Postgres 14 environment, with no replication, no high availability, and no load balancing. This environment has older and slower disk drives, and the test is driven by a single client process.

In this case 24% of the round trips (client to database and back) are for commit processing. However, commit processing is consuming 89% of the total database time. (All times are measured from within the client.) In this non-virtualized environment, on the exact same hardware, other RDBMSs have a much lower commit-time/total-database-time ratio.

In a virtualized environment, where both the client system and the database server run in separate VMs with faster disks (and possibly many other active VMs), this number drops to about 70% for Postgres. We see similar results in Linux environments as well.

*What is a good approach to identifying what is happening within the commit processing?*

*Are there any known bugs in this area?*

Any other thoughts would be greatly appreciated.

Thank you.

-Tim

Line  Freq   Cum.t      Max.t  Avg.t  Rows   Err.  Statement
1     2268   *301.908*  0.243  0.133   2235  0     COMMIT
2      755      9.665   0.102  0.013   2326  0     INSERT INTO POMQUERY_U ( col0 ) VALUES (:1)
3      266      0.195   0.103  0.001    263  0     SELECT t_01.puid FROM PITEM t_01 WHERE ( UPPER ( t_01.pitem_id ) = UPPER( :1 ) )
4      244      0.186   0.002  0.001    260  0     INSERT INTO POM_TIMESTAMP (puid, ptimestamp, pdbtimestamp, pdeleted) (SELECT :1, :2, now() ...
[...snip...]
Sum:  9264   *338.200*  -      -      12050  -
Percent Commit: 24%  *89%*

My latest run was similar, in that its total database time was 14876.691 seconds with a total commit time of 13032.575 seconds, or 88% commit time.

Postgres Version: PostgreSQL 14.5, compiled by Visual C++ build 1914, 64-bit
OS Name: Microsoft Windows Server 2019 Standard
OS Version: 10.0.17763 N/A Build 17763
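One sketch of an approach to the first question (this assumes nothing beyond the standard pg_stat_activity view; the wait-event names shown in the comments are the usual ones for WAL flushes, but what you actually see will depend on your workload): sample wait events from a second session while the test runs.

    -- Run in a second session while the test is executing.  Time spent
    -- in commit processing shows up as WAL-related waits on the client
    -- backend:
    SELECT pid, state, wait_event_type, wait_event, query
    FROM pg_stat_activity
    WHERE state <> 'idle';
    -- e.g. wait_event_type = 'IO', wait_event = 'WALSync' during COMMIT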
Re: Why are commits consuming most of the database time?
Tim Slechta writes:
> The test data below is from a non-virtualized (client system and database
> server) Postgres 14 environment, with no replication, no high availability,
> and with no load balancing. This environment has older and slower disk
> drives, and the test is driven by a single client process.

> In this case 24% of the round trips (client to database and back) are for
> commit processing. However, commit processing is consuming 89% of the
> total database time. (All times are measured from within the client.)

You didn't say how big the transactions are, but if they're not writing a lot of data apiece, this result seems totally non-surprising. The commits have to push WAL log data down to disk before they can promise that the transaction's results are durable, while the statements within the transactions probably are not waiting for any disk writes at all.

If you don't need strict ACID compliance, you could turn off synchronous_commit so that commits don't wait for WAL flush. (This doesn't risk the consistency of your database, but it does mean that a crash might lose the last few transactions that clients were told got committed.)

If you do need strict ACID compliance, get a better disk subsystem. Or, perhaps, just a better OS ... Windows is generally not thought of as the best-performing platform for Postgres.

regards, tom lane
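As a sketch of the synchronous_commit suggestion (weigh the durability trade-off described above before using it):

    -- Per session: COMMIT returns without waiting for the WAL flush.
    SET synchronous_commit = off;

    -- Or cluster-wide; this setting is reloadable without a restart:
    ALTER SYSTEM SET synchronous_commit = off;
    SELECT pg_reload_conf();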
Re: Why are commits consuming most of the database time?
Tom,

Thank you for your comments; they are very much appreciated.

You are correct that the transactions are typically short, likely with dozens of rows.

Do you know of any problems or defects in this area? Would there be any usefulness to generating Postgres log files?

Once again, thanks for your help.

-Tim
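On the log-file question, one hedged sketch of settings that would surface slow commits in the Postgres log (the 50 ms threshold is an arbitrary illustration; tune it to the workload):

    -- Log any statement (including COMMIT) slower than 50 ms:
    ALTER SYSTEM SET log_min_duration_statement = '50ms';
    -- Log checkpoints, which can stall commit-time WAL flushes:
    ALTER SYSTEM SET log_checkpoints = on;
    SELECT pg_reload_conf();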
