On 1/31/26 11:46, yudhi s wrote:
Thank you.


    1) Without even looking at the plan I'm going to say 2-VCPU and 16GB
    RAM is insufficient for what you want to do.


Can you please explain in a bit more detail how much VCPU and RAM, at minimum, would be enough to satisfy this requirement? And how do you normally do that calculation?

Don't know what the minimum requirements are. It would depend on many variables:

1) The plan being chosen, which in turn depends on the schema information as well as the data turnover.
2) What the VCPU is actually emulating.
3) The efficiency of the virtual machines/containers with regard to accessing memory and storage.
4) The service limits of the virtualization.
5) What the storage system is and how performant it is.

In other words, this is something you will need to test and derive your own formula for.


    2) You will need to provide the schema definitions for the tables
    involved.

Do you mean the table DDL, or would just the index definitions on the tables help?

Basically what you get in psql when you do \d some_table.
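
For example, for a hypothetical table (the name and columns below are made up for illustration), the output looks something like:

test=> \d orders
                         Table "public.orders"
   Column   |           Type           | Collation | Nullable | Default
------------+--------------------------+-----------+----------+---------
 id         | bigint                   |           | not null |
 customer   | bigint                   |           | not null |
 created_at | timestamp with time zone |           | not null |
Indexes:
    "orders_pkey" PRIMARY KEY, btree (id)
    "orders_created_at_idx" btree (created_at DESC)

That shows the column definitions and the indexes in one place, which is what is needed to reason about the plan.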


Also, I was trying to understand: by just looking at the EXPLAIN ANALYZE output, is there any way to tie a specific step in the plan to being the major contributor of CPU usage? That way we could try to fix that part rather than looking through the whole query, since it is a big query.
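
For illustration (the table, query, and timings below are all made up), this is the kind of per-node output in question:

EXPLAIN (ANALYZE, BUFFERS)
SELECT customer, count(*) FROM orders GROUP BY customer;

 HashAggregate  (actual time=520.1..540.3 rows=1000 loops=1)
   ->  Seq Scan on orders  (actual time=0.02..210.5 rows=1000000 loops=1)

The "actual time" of a node is cumulative over its children, so a node's own contribution is roughly its total minus its children's totals (multiplied by loops); the node where that difference is largest is usually the place to start.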

And do you have any suggestions to improve Top-N queries where the base table may have many rows in it?
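
By a Top-N query I mean something of this shape (table and column names are placeholders):

SELECT id, customer, created_at
FROM orders
ORDER BY created_at DESC
LIMIT 10;

One standard approach, if it applies here, is a btree index matching the ORDER BY (e.g. CREATE INDEX ON orders (created_at DESC)), which lets the planner run a Limit over an Index Scan and stop after N rows instead of sorting the whole table.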


--
Adrian Klaver
[email protected]

