On Sat, Jan 31, 2026 at 10:05 PM Ron Johnson <[email protected]> wrote:
> On Sat, Jan 31, 2026 at 2:47 PM yudhi s <[email protected]> wrote:
>
>> Thank you.
>>
>>> 1) Without even looking at the plan, I'm going to say 2-vCPU and 16GB
>>> RAM are insufficient resources for what you want to do.
>>
>> Can you please explain in a bit more detail how much vCPU and RAM would
>> be enough to satisfy this requirement, and how do you normally do that
>> calculation?
>>
>>> 2) You will need to provide the schema definitions for the tables
>>> involved.
>>
>> Do you mean the table DDL, or would just the index definitions on the
>> tables help?
>>
>> Also, I was trying to understand: just by looking at the EXPLAIN ANALYZE
>> output, is there any way to tie a specific step in the plan to being the
>> major consumer of CPU? Then we could try to fix that part rather than
>> looking through the whole query, since it is a big query.
>
> It looks like 71% (748ms of a total 1056ms) of elapsed time is taken by
> the c_1.tran_date external sort on line 150.
>
> That, obviously, is what you should work on.
>
> 1. You say you increased work_mem. From what, to what?
> 2. You also say that it did not reduce execution time. Please post the
>    EXPLAIN output from after increasing work_mem.
> 3. Did you remember to run SELECT pg_reload_conf(); after increasing
>    work_mem?
> 4. Is there an index on APP_schema.txn_tbl.tran_date?
>
>> And do you have any suggestions for improving TOP-N queries where the
>> base table may have many rows in it?
>
> The DBtune Free Edition <http://app.dbtune.com> can help you find the
> correct adjustments for work_mem and other server parameters
> <https://dbtune.com/blog/dbtunes-multi-dimensional-performance-tuning-space>.
> It's designed to help optimize your PostgreSQL runtime for your current
> hardware setup.
>
> --
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
> <Redacted> lobster!
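
To make points 1-3 in the numbered list above concrete, here is a minimal
sketch of how one might test a larger work_mem and re-check the plan,
assuming psql. The table and alias names are taken from the quoted plan;
the column list and the 256MB value are illustrative only, not
recommendations:

    -- Session-scoped change: takes effect immediately for this
    -- connection only, so no pg_reload_conf() is needed to test it.
    SET work_mem = '256MB';

    -- Re-run the problem query (a stand-in is shown here) and look at
    -- the sort node in the output.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c_1.tran_date
    FROM APP_schema.txn_tbl c_1
    ORDER BY c_1.tran_date;

    -- To make the change server-wide instead, write it to
    -- postgresql.auto.conf and reload the configuration:
    ALTER SYSTEM SET work_mem = '256MB';
    SELECT pg_reload_conf();

If the sort node in the new plan reports "Sort Method: quicksort" rather
than "Sort Method: external merge  Disk: ...", the sort now fits in
memory.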

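On point 4 and the TOP-N question, a hedged sketch of an index that could
eliminate the external sort, assuming the sort key really is
APP_schema.txn_tbl.tran_date as quoted above. The index name is made up,
and note that CREATE INDEX CONCURRENTLY cannot run inside a transaction
block:

    -- A btree index on the sort key lets the planner read rows in
    -- tran_date order instead of sorting them.
    CREATE INDEX CONCURRENTLY txn_tbl_tran_date_idx
        ON APP_schema.txn_tbl (tran_date);

    -- A TOP-N query this index can serve without a full sort: the
    -- executor walks the index and stops after the first 10 rows.
    SELECT *
    FROM APP_schema.txn_tbl
    ORDER BY tran_date DESC
    LIMIT 10;

A btree index can be scanned in either direction, so the same index
serves both ASC and DESC orderings.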