Re: Query optimization

2025-03-14 Thread Greg Sabino Mullane
On Thu, Mar 13, 2025 at 11:49 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:

> Taking 43 ms to return one row is not optimal.
>

It's actually much faster than that, but even 43 ms would be fine overall. The
query is already close to optimal, as it uses a single index-only scan. There
are a few tricks left that might make it slightly faster, but if your
application is that sensitive to timings you may want to look into some sort
of in-memory caching system. Or find us a run in which the execution time is
not 1/20th of a millisecond and we can work on that one.

Also take a look at pg_stat_statements so you can see how fast the query is
on average.
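
For reference, a minimal sketch of that check (it assumes the
pg_stat_statements extension is installed and listed in
shared_preload_libraries; the filter on 'cachekeys' is only illustrative):

-- Average and worst-case execution time per normalized statement:
SELECT calls,
       round(mean_exec_time::numeric, 3) AS mean_ms,
       round(max_exec_time::numeric, 3)  AS max_ms,
       query
FROM   pg_stat_statements
WHERE  query ILIKE '%cachekeys%'
ORDER  BY mean_exec_time DESC
LIMIT  10;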

Cheers,
Greg

--
Crunchy Data - https://www.crunchydata.com
Enterprise Postgres Software Products & Tech Support


Re: Query optimization

2025-03-14 Thread Durgamahesh Manne
On Fri, 14 Mar, 2025, 09:11 Ron Johnson,  wrote:

> On Thu, Mar 13, 2025 at 11:25 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> On Fri, Mar 14, 2025 at 8:19 AM Ron Johnson 
>> wrote:
>>
>>> On Thu, Mar 13, 2025 at 10:16 PM Durgamahesh Manne <
>>> maheshpostgr...@gmail.com> wrote:
>>> [snip]
>>>
 Hi Adrian Klaver

 1) Postgres version.

 select version();
                                                    version
 ----------------------------------------------------------------------------------------------------------------
  PostgreSQL 14.12 on aarch64-unknown-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-6), 64-bit

 2) Complete (including indexes) table schema.

                              Table "liveaggregations.cachekeys"
     Column     |          Type          | Collation | Nullable | Default | Storage  | Compression
 ---------------+------------------------+-----------+----------+---------+----------+-------------
  cachetype     | character varying(255) |           |          |         | extended |
  trsid         | character varying(255) |           |          |         | extended |
  brandid       | character varying(255) |           |          |         | extended |
  sportid       | character varying(255) |           |          |         | extended |
  competitionid | character varying(255) |           |          |         | extended |
  eventid       | character varying(255) |           |          |         | extended |
  marketid      | character varying(255) |           |          |         | extended |
  selectionid   | character varying(255) |           |          |         | extended |
  keytype       | character varying(255) |           |          |         | extended |
  key           | character varying(255) |           | not null |         | extended |
 Indexes:
     "cachekeys_key_pk" PRIMARY KEY, btree (key)
     "idx_cachekeys" btree (cachetype, trsid, brandid, sportid, competitionid, eventid, marketid)
     "idx_marketid" btree (marketid)

 3) Output of EXPLAIN ANALYZE of query.

  Result  (cost=2.80..2.83 rows=1 width=1) (actual time=0.030..0.030 rows=1 loops=1)
    InitPlan 1 (returns $0)
      ->  Index Only Scan using idx_cachekeys on cachekeys  (cost=0.55..2.80 rows=1 width=0) (actual time=0.029..0.029 rows=1 loops=1)
            Index Cond: ((cachetype = 'BoMatrix'::text) AND (trsid = 'daznbetuk'::text) AND (brandid = 'daznbet'::text) AND (sportid = 'BOX'::text) AND (competitionid = 'U-1998'::text) AND (eventid = 'U-523596'::text))
            Heap Fetches: 0
  Planning Time: 0.221 ms
  Execution Time: 0.046 ms

>>>
>>> That looks pretty reasonable.
>>>
>>> 1. Now show what happens with the LIMIT clause.
>>> 2. How many rows does it return?
>>> 3. Do you keep the table regularly vacuumed and analyzed?
>>>
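
(For question 3, one way to check, sketched under the assumption that the
cumulative statistics views are accessible; 'cachekeys' is the table shown in
the schema above:)

-- Last (auto)vacuum and (auto)analyze times, plus current dead-tuple count:
SELECT relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze,
       n_dead_tup
FROM   pg_stat_user_tables
WHERE  schemaname = 'liveaggregations'
  AND  relname    = 'cachekeys';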
>> Hey Ron
>>
>> 1. Now show what happens with the LIMIT clause.
>>
>> Result set of the query (size of the table: 287 MB):
>>
>>  exists
>> --------
>>  t
>> (1 row)
>>
>> --------------------------------------------------------------------------
>>  Result  (cost=2.80..2.83 rows=1 width=1) (actual time=0.029..0.030 rows=1 loops=1)
>>    InitPlan 1 (returns $0)
>>      ->  Index Only Scan using idx_cachekeys on cachekeys  (cost=0.55..2.80 rows=1 width=0) (actual time=0.028..0.028 rows=1 loops=1)
>>            Index Cond: ((cachetype = 'BoMatrix'::text) AND (trsid = 'daznbetuk'::text) AND (brandid = 'daznbet'::text) AND (sportid = 'BOX'::text) AND (competitionid = 'U-1998'::text) AND (eventid = 'U-523596'::text))
>>            Heap Fetches: 1
>>  Planning Time: 0.084 ms
>>  Execution Time: 0.043 ms
>>
>
> This might be due to caching.  Run the query with LIMIT three times, and
> then remove the LIMIT and run three times.
>
> Honestly, though, the execution timings seem pretty good.  What exactly is
> the problem?
>
>
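
The query itself is not shown in the thread, but from the plan shape (an
InitPlan feeding a one-column Result) and the "exists / t" output it looks
like a SELECT EXISTS of roughly the form below; this is a sketch inferred
from the Index Cond, not the exact statement. Running it with BUFFERS and
comparing the shared "hit" vs "read" counts across repeated runs shows how
much of the timing reflects a warm cache, which is what the with/without
LIMIT comparison above is getting at.

EXPLAIN (ANALYZE, BUFFERS)
SELECT EXISTS (
    SELECT 1
    FROM   liveaggregations.cachekeys
    WHERE  cachetype     = 'BoMatrix'
      AND  trsid         = 'daznbetuk'
      AND  brandid       = 'daznbet'
      AND  sportid       = 'BOX'
      AND  competitionid = 'U-1998'
      AND  eventid       = 'U-523596'
);
-- "Buffers: shared hit=..." are pages found in shared_buffers;
-- "read=..." are pages fetched from the OS/disk.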

Hi Team and Adrian

LIMIT is not necessary in this SELECT in this case.

Taking 43 ms to return one row is not optimal.

Regards,
Durga Mahesh
