On Mon, Feb 2, 2026 at 8:57 PM Ron Johnson <[email protected]> wrote:
>> My apologies if I misunderstand the plan, but as far as I can see it is
>> spending ~140ms (140ms - 6ms), i.e. almost all of the time now, in
>> performing the nested loop join below. So my question was: is there any
>> possibility of reducing the resource consumption or response time further
>> here? I hope my understanding is correct.
>>
>> ->  Nested Loop  (cost=266.53..1548099.38 rows=411215 width=20) (actual
>>         time=*6.009..147.695* rows=1049 loops=1)
>>       Join Filter: ((df.ent_id)::numeric = m.ent_id)
>>       Rows Removed by Join Filter: 513436
>>       Buffers: shared hit=1939
>
> I don't see m.ent_id in the actual query. Did you only paste a portion
> of the query?
>
> Also, casting in a JOIN typically brutalizes the ability to use an index.

Thank you. I actually tried executing only the first two CTEs, where the
query was spending most of the time, so the alias has changed. I have also
changed the real table names before posting here; I hope that is fine.

However, I verified the data type of the ent_id column: in "ent" it is
int8, and in "txn_tbl" it is numeric(12). Do you mean that this difference
in data types is causing the high response time during the nested loop
join? My understanding was that it would be cast internally without any
additional burden. Also, even after creating an index on
"(df.ent_id)::numeric", I still get the same plan and response time.
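
For what it's worth, a sketch of what I could try next, assuming the table
and column names from the plan above and that every txn_tbl.ent_id value is
integral and fits in bigint (all assumptions on my side, not verified):

    -- Option 1: cast the numeric side down to bigint so the join compares
    -- bigint = bigint, keeping a plain index on the int8 column usable
    -- (only safe if all txn_tbl.ent_id values fit in bigint):
    --   ... JOIN txn_tbl m ON df.ent_id = m.ent_id::bigint ...

    -- Option 2: align the column types permanently (rewrites the table and
    -- takes a heavy lock, so it would need a maintenance window):
    ALTER TABLE txn_tbl ALTER COLUMN ent_id TYPE bigint;

    -- Option 3: if the table can't be altered, an expression index on the
    -- numeric side cast to bigint, instead of casting the int8 side up:
    CREATE INDEX ON txn_tbl ((ent_id::bigint));

The idea in all three being the same: keep the int8 column uncast so the
planner can match it against an index, rather than indexing the
"(df.ent_id)::numeric" expression.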
