Hi Tom/David
Could you please help me get started optimising this query?
Thanks & Regards
Shubham Mittal
On Tue, Sep 7, 2021, 8:57 PM Michael Lewis wrote:
> Have you ever used this site to visualize the explain plan and spot bad
> estimates and slow nodes? https://explain.depesz.com/s/
Have you ever used this site to visualize the explain plan and spot bad
estimates and slow nodes? https://explain.depesz.com/s/WE1R
This stands out to me-
*Subquery Scan on npiactionjoin (cost=10,165,289.40..10,167,192.01 rows=293
width=16) (actual time=118,413.432..118,806.684 rows=446,782 loops
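For reference, a plan like the one above is typically captured with EXPLAIN (ANALYZE, BUFFERS) and the text output pasted into explain.depesz.com. The node shown estimated 293 rows but produced 446,782, which points at stale or insufficient statistics; the sketch below is illustrative only (the table and column names are taken from later in this thread, and the statistics target is an assumption):

```sql
-- Capture the full plan with runtime and buffer statistics,
-- then paste the text output into https://explain.depesz.com/
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- the report query under discussion

-- Hypothetical remedy for the 293-vs-446,782 misestimate: refresh
-- statistics, optionally with a higher per-column statistics target.
ALTER TABLE task ALTER COLUMN common_details SET STATISTICS 1000;
ANALYZE task;
```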
On Monday, September 06, 2021 at 11:45:34 p.m. +0530, Shubham Mittal
wrote:
> 20 lakh is the current number of rows in the task table on which the
> query is executed.
Ahh, I had never come across this (Indian) unit 'lakh' and now understand
that we are talking about https://en.wikipedi
On Mon, Sep 6, 2021 at 20:14 Matthias Apitz wrote:
>
>
> What does the term 'over 20Lakh rows' mean? Thanks
AFAIK in India (and surrounding areas) 20 Lakh = 20 * 100 000 = 2 000 000
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
> Publ
20 lakh is the current number of rows in the task table on which the query
is executed.
On Mon, Sep 6, 2021, 11:44 PM Matthias Apitz wrote:
>
> What does the term 'over 20Lakh rows' mean? Thanks
>
> matthias
What does the term 'over 20Lakh rows' mean? Thanks
matthias
--
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub
"David G. Johnston" writes:
> On Thu, Sep 2, 2021 at 3:16 PM Shubham Mittal
> wrote:
>> *Please help in optimizing this query. I need to generate reports daily
>> using this query, and it takes almost 15 to 20 minutes to execute due to
>> the joins.*
> Use jsonb_populate_recordset (or
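The suggestion above, expanding the jsonb column into typed rows once rather than probing it repeatedly inside the joins, might look like the following sketch. The row type and its fields are invented for illustration (the thread only tells us the column is called common_details), and this form assumes the column holds a jsonb array of objects:

```sql
-- Illustrative only: a made-up row type matching keys inside common_details.
CREATE TYPE order_detail AS (order_id bigint, status text);

-- Expand the jsonb array once into typed rows, then join against those
-- rows instead of extracting fields from the jsonb in every join clause.
SELECT t.id, d.order_id, d.status
FROM   task t
CROSS  JOIN LATERAL jsonb_populate_recordset(
           null::order_detail, t.common_details) AS d;
```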
On Thu, Sep 2, 2021 at 3:16 PM Shubham Mittal
wrote:
> Hi ,
>
> *Please help in optimizing this query. I need to generate reports daily
> using this query, and it takes almost 15 to 20 minutes to execute due to
> the joins.*
> *Here common_details is a jsonb column.*
>
> SELECT T.order
What is T and how many rows are in there? How many rows in task_history?
What indexes exist? Are you confident you want 2 million rows in that
result set? What version is this on? What pg_settings have been changed
from defaults?
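For anyone following along, the information requested above can be gathered with standard catalog queries; this is a hedged sketch, and the table names (task, task_history) are taken from earlier in the thread:

```sql
SELECT version();                              -- server version
SELECT count(*) FROM task_history;             -- table sizes
SELECT indexname, indexdef
FROM   pg_indexes
WHERE  tablename IN ('task', 'task_history');  -- existing indexes
SELECT name, setting
FROM   pg_settings
WHERE  source NOT IN ('default', 'override');  -- non-default settings
```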