On Tue, Feb 3, 2026 at 1:50 PM yudhi s <[email protected]> wrote:

>
>
> On Tue, 3 Feb, 2026, 9:37 pm Adrian Klaver, <[email protected]>
> wrote:
>
>> On 2/3/26 07:59, Ron Johnson wrote:
>>
>> >
>> >
>> > There is no VARCHAR or CHAR; there is only TEXT.  Thus, this is 100%
>> > expected and normal.
>>
>> What Ron is saying is that there are varchar and char types, but they
>> boil down to text per:
>>
>> https://www.postgresql.org/docs/current/datatype-character.html
>>
>> "text is PostgreSQL's native string data type, in that most built-in
>> functions operating on strings are declared to take or return text not
>> character varying. For many purposes, character varying acts as though
>> it were a domain over text."
>>
>> As to performance see:
>>
>> "
>> Tip
>>
>> There is no performance difference among these three types, apart from
>> increased storage space when using the blank-padded type, and a few
>> extra CPU cycles to check the length when storing into a
>> length-constrained column. While character(n) has performance advantages
>> in some other database systems, there is no such advantage in
>> PostgreSQL; in fact character(n) is usually the slowest of the three
>> because of its additional storage costs. In most situations text or
>> character varying should be used instead.
>> "
>>
>
> Thank you. I was looking at those casts (::text) in the explain plan
> output in a similar way (as was happening in the int8-to-numeric join
> scenario) and was wondering whether it was spending some CPU cycles
> doing those ::text casts behind the scenes for that column, and whether
> there is some way (a data type change) to stop them. But from your
> explanation, it looks like that representation in the query plan is
> normal and has no performance overhead as such. Thanks again.
>
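To make that concrete, here is a minimal sketch (the table and column
names are made up). Comparing a varchar column to a string constant
displays an implicit ::text cast in the plan, which is purely
notational:

  -- hypothetical table; varchar acts like a domain over text
  CREATE TABLE demo (ent_id varchar(50));

  EXPLAIN SELECT * FROM demo WHERE ent_id = 'abc';
  -- the Filter line shows: ((ent_id)::text = 'abc'::text)

The varchar-to-text cast is binary-coercible, so it costs nothing at
execution time.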
> In regards to the below, the "nested loop" has a response time of
> ~100ms. I understand that the casting function is now removed, after
> changing the data types of the columns to match on both sides of the
> join.
>
> So, is it expected for a nested loop over 500k rows to take 100ms?
>

HAVE YOU ANALYZED THE TABLES?
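
For example (assuming the two tables in the plan below are df and m):

  ANALYZE df;
  ANALYZE m;

  -- check when the planner statistics were last refreshed
  SELECT relname, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
  WHERE relname IN ('df', 'm');

Stale or missing statistics are a common cause of estimate/actual
mismatches like the rows=579149 estimate vs. rows=1049 actual shown
below.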


>
> ->  Nested Loop  (cost=262.77..1342550.91 rows=579149 width=20)
>                  (actual time=6.406..107.946 rows=1049 loops=1)
>               Join Filter: (df.ent_id = m.ent_id)
>               Rows Removed by Join Filter: 514648
>               Buffers: shared hit=1972
>
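A nested loop that scans half a million inner rows and keeps only 1,049
of them via the join filter often means there is no usable index on the
join key. A sketch, assuming df is the inner side and ent_id is the
join column (both taken from the plan above):

  -- hypothetical index; lets the inner side be probed by index
  -- instead of being filtered row by row
  CREATE INDEX df_ent_id_idx ON df (ent_id);

Whether the planner then picks an index nested loop or a hash join
depends on up-to-date statistics, hence the ANALYZE question above.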

Decompose complex problems into a small problem, then start adding stuff.

https://gist.github.com/databasetech0073/6688701431dc4bf4eaab8d345c1dc65f

In this case, I would run SELECT * FROM limited_txns to get a baseline
EXPLAIN, then strip out all of the WHERE clauses, the ORDER BY, and the
LIMIT, and run it again for another EXPLAIN.

Then add back lines 33-34 and EXPLAIN. Then line 7, etc., etc., saving
each EXPLAIN. See what makes it break.
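
A rough sketch of that loop (the predicates here are placeholders, not
the actual lines from the gist):

  -- step 1: baseline, no filters
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM limited_txns;

  -- step 2: re-add one predicate at a time, comparing plans
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM limited_txns
  WHERE some_col = 'some_value';   -- hypothetical predicate

  -- step 3: only at the end, re-add the ORDER BY and LIMIT
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM limited_txns
  WHERE some_col = 'some_value'
  ORDER BY some_ts DESC            -- hypothetical sort column
  LIMIT 100;

The first step whose plan shape changes badly (row estimates blowing
up, a join flipping to a nested loop) is usually the culprit.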

-- 
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
