Re: pgaudit.log_parameter

2019-07-31 Thread Artur Zakirov

Hello,

On 31.07.2019 14:21, Luca Ferrari wrote:

Hello,
I'm a little confused about the setting pgaudit.log_parameter of the
pgaudit extension
(https://github.com/pgaudit/pgaudit/blob/master/README.md).
What's the purpose of this? An example of a query that will trigger such
parameter logging? Apparently I cannot get it to provide me more
information than ''.


pgaudit.log_parameter allows logging the parameters of prepared statements.
See the documentation on PREPARE:


https://www.postgresql.org/docs/current/sql-prepare.html

The following example logs parameters if pgaudit.log_parameter is on:

EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);

It should log the parameters: 1, 'Hunter Valley', 't', 200.00
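
For reference, the fooplan statement above is the example from the PREPARE
documentation page. A minimal sketch to try it end to end (the table
definition and the pgaudit settings below are assumptions, not taken from
the thread):

```
-- requires pgaudit in shared_preload_libraries
CREATE EXTENSION pgaudit;
CREATE TABLE foo (id int, location text, flag bool, amount numeric);

SET pgaudit.log = 'write';
SET pgaudit.log_parameter = on;

PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES ($1, $2, $3, $4);

EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
```

The audit log entry for the EXECUTE should then contain the statement
together with the bound parameter values.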

--
Artur




Re: FTS and tri-grams

2021-01-06 Thread Artur Zakirov
On Tue, Jan 5, 2021 at 10:26 PM Mark Phillips
 wrote:
> We now wonder if tri-gram alone can achieve a fast full text search result.
> If so, could the tsvector column then be dropped?
>
> 1. Is FTS required for tri-gram to work?
> 2. Are these independent of each other?

FTS and tri-gram search are independent features. You can use either one
on its own; which fits best depends on your use case.

> 3. Is tri-gram alone sufficient for a “full text search” feature?

It isn't sufficient if you want exact and/or phrase search.
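
To illustrate the difference, a small sketch (the table docs(body text) and
the queries are made-up examples, not from the thread):

```
-- trigram matching with pg_trgm: substring and similarity search
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX docs_body_trgm_idx ON docs USING gin (body gin_trgm_ops);
SELECT * FROM docs WHERE body ILIKE '%hunter valley%';

-- full text search: word and phrase matching with stemming
CREATE INDEX docs_body_fts_idx
    ON docs USING gin (to_tsvector('english', body));
SELECT * FROM docs
WHERE to_tsvector('english', body) @@ phraseto_tsquery('english', 'hunter valley');
```

A trigram index speeds up LIKE/ILIKE and similarity queries, but it has no
notion of words, stemming or phrases; that is what tsvector/tsquery provide.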

-- 
Artur




Re: How to make a map in pg kernel?

2023-10-06 Thread Artur Zakirov
Hello,

On Thu, 5 Oct 2023 at 16:42, jacktby jacktby  wrote:
>
> Hi, I’m writing some kernel code in PG 15, and now I need a map
> struct. I know C doesn’t support this, so does PG provide an internal
> struct? Hoping for your replies.

PostgreSQL has hash tables:
https://github.com/postgres/postgres/blob/REL_16_0/src/include/utils/hsearch.h
It is not exactly a map struct, but it might be suitable for your case.
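
For illustration, a minimal sketch of how the dynahash API from hsearch.h
is typically used (the entry struct, key type and names below are made up
for the example):

```
#include "postgres.h"
#include "utils/hsearch.h"

typedef struct MyEntry
{
	Oid			key;		/* hash key; must be the first field */
	int64		counter;	/* arbitrary payload */
} MyEntry;

static HTAB *my_map = NULL;

static void
my_map_init(void)
{
	HASHCTL		ctl;

	memset(&ctl, 0, sizeof(ctl));
	ctl.keysize = sizeof(Oid);
	ctl.entrysize = sizeof(MyEntry);

	/* HASH_BLOBS: the key is compared as raw bytes */
	my_map = hash_create("my map", 128, &ctl, HASH_ELEM | HASH_BLOBS);
}

static void
my_map_put(Oid key, int64 value)
{
	bool		found;
	MyEntry    *entry;

	/* HASH_ENTER creates the entry if it is missing and returns it either way */
	entry = (MyEntry *) hash_search(my_map, &key, HASH_ENTER, &found);
	entry->counter = value;
}

static bool
my_map_get(Oid key, int64 *value)
{
	MyEntry    *entry;

	entry = (MyEntry *) hash_search(my_map, &key, HASH_FIND, NULL);
	if (entry == NULL)
		return false;
	*value = entry->counter;
	return true;
}
```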

-- 
Artur




Re: Pgxs - How to reference another extension

2024-03-11 Thread Artur Zakirov
On Mon, 11 Mar 2024 at 13:26, Michał Kłeczek  wrote:
>
>
> > On 11 Mar 2024, at 11:41, Michał Kłeczek  wrote:
> >
> > Hi,
> >
> > I am trying to create an extension that delegates some calls to btree_gist 
> > functions:
> >
> > DirectFunctionCall5Coll(
> >gbt_text_consistent, …other arguments);
> >
> > Basic PGXS Makefile does not work - I get linker error:
> >
> > Undefined symbols for architecture arm64:
> >  "_gbt_text_consistent", referenced from:
> >
> >
> > Anyone could provide me with some hints?
>
> I’ve added:
> PG_LDFLAGS += -L$(shell $(PG_CONFIG) --pkglibdir) -lbtree_gist

You can try the FunctionCall5Coll() or OidFunctionCall5Coll() functions.

OidFunctionCall5Coll() calls fmgr_info() and then FunctionCall5Coll(); all
you need is the Oid of the target function.

FunctionCall5Coll() is useful when you keep a cache variable for the
function, where you store the previously computed FmgrInfo.
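
A hedged sketch of what that could look like (the argument-type list for
gbt_text_consistent and the caching pattern are my assumptions, not
something verified against your code):

```
#include "postgres.h"

#include "catalog/pg_type.h"
#include "fmgr.h"
#include "nodes/pg_list.h"
#include "nodes/value.h"
#include "parser/parse_func.h"
#include "utils/memutils.h"

static FmgrInfo consistent_flinfo;
static bool		consistent_flinfo_valid = false;

static Datum
call_gbt_text_consistent(Oid collation, Datum arg1, Datum arg2,
						 Datum arg3, Datum arg4, Datum arg5)
{
	if (!consistent_flinfo_valid)
	{
		/* GiST consistent functions take (internal, <type>, int2, oid, internal) */
		Oid			argtypes[5] = {INTERNALOID, TEXTOID, INT2OID, OIDOID, INTERNALOID};
		Oid			funcoid;

		/* assumes the btree_gist extension is installed and visible */
		funcoid = LookupFuncName(list_make1(makeString(pstrdup("gbt_text_consistent"))),
								 5, argtypes, false);

		/* cache the FmgrInfo in a long-lived context for later calls */
		fmgr_info_cxt(funcoid, &consistent_flinfo, TopMemoryContext);
		consistent_flinfo_valid = true;

		/*
		 * One-shot alternative, no caching:
		 *   return OidFunctionCall5Coll(funcoid, collation,
		 *                               arg1, arg2, arg3, arg4, arg5);
		 */
	}

	return FunctionCall5Coll(&consistent_flinfo, collation,
							 arg1, arg2, arg3, arg4, arg5);
}
```

This avoids linking against btree_gist at build time; the function is
resolved through the catalogs when it is first needed.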

-- 
Artur




Re: Performance regression when adding LIMIT 1 to a query

2025-04-07 Thread Artur Zakirov
On Mon, 7 Apr 2025 at 14:45, Costa Alexoglou  wrote:
> ...
> with a plan:
> ```
> Gather Merge  (cost=115584.47..118515.35 rows=25120 width=824) (actual time=46.004..74.267 rows=29653 loops=1)
>   Workers Planned: 2
>   Workers Launched: 2
>   ->  Sort  (cost=114584.45..114615.85 rows=12560 width=824) (actual time=41.200..47.322 rows=9884 loops=3)
>         Sort Key: id
>         Sort Method: external merge  Disk: 16360kB
>         Worker 0:  Sort Method: external merge  Disk: 15552kB
>         Worker 1:  Sort Method: external merge  Disk: 14536kB
>         ->  Parallel Bitmap Heap Scan on databases_metrics  (cost=990.77..109175.83 rows=12560 width=824) (actual time=3.326..14.295 rows=9884 loops=3)
>               Recheck Cond: ((db_instance_id = 'c4c97a60-b88e-4cd3-a2f1-random-uuid'::uuid) AND (created_at >= '2023-03-15 10:00:00+00'::timestamp with time zone) AND (created_at <= '2025-04-03 10:00:00+00'::timestamp with time zone))
>               Filter: ((metrics -> 'perf_average_query_runtime'::text) IS NOT NULL)
>               Rows Removed by Filter: 68
>               Heap Blocks: exact=4272
>               ->  Bitmap Index Scan on idx_databases_metrics_instance_date_custom_created_debugging  (cost=0.00..983.24 rows=30294 width=0) (actual time=3.786.786 rows=29856 loops=1)
>                     Index Cond: ((db_instance_id = 'c4c97a60-b88e-4cd3-a2f1-random-uuid'::uuid) AND (created_at >= '2023-03-15 10:00:00+00'::timestamp with time zone) AND (created_at <= '2025-04-03 10:00:00+00'::timestamp with time zone))
> ...
> With a plan:
> ```
> Limit  (cost=0.43..229.66 rows=1 width=824) (actual time=7538.004..7538.005 rows=1 loops=1)
>   ->  Index Scan using databases_metrics_pkey on databases_metrics  (cost=0.43..6909156.38 rows=30142 width=824) (actual time=7538.002..7538.003 rows=1 loops=1)
>         Filter: ((created_at >= '2023-03-15 10:00:00+00'::timestamp with time zone) AND (created_at <= '2025-04-03 10:00:00+00'::timestamp with time zone) AND ((metrics -> 'perf_average_query_runtime'::text) IS NOT NULL) AND (db_instance_id = 'c4c97a60-b88e-4cd3-a2f1-random-uuid'::uuid))
>         Rows Removed by Filter: 10244795
> Planning Time: 0.128 ms
> Execution Time: 7538.032 ms
> ```

In your second query Postgres uses the index "databases_metrics_pkey",
which I assume is built on the "id" column. That index can be very fast
with an "ORDER BY ... LIMIT" statement, but because of the additional
filter Postgres first has to skip roughly 10 million rows that don't
satisfy the filter before it reaches a single matching row.

In the first query Postgres only has to read and sort about 29k rows using
the index "idx_databases_metrics_instance_date_custom_created_debugging",
which is better suited to this filter because it covers the filtered
columns.

I'm not sure why Postgres chooses the index "databases_metrics_pkey" here.
Perhaps the table statistics are outdated. Did you try running VACUUM
ANALYZE on the table?
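
If you want to check, a quick sketch (the table name is taken from the plan
above; the rest is generic):

```
VACUUM ANALYZE databases_metrics;

-- see when statistics were last refreshed
SELECT relname, last_analyze, last_autoanalyze, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'databases_metrics';
```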

-- 
Kind regards,
Artur




Re: ERROR: could not read block 0 in file when creating an index out of a function

2025-03-12 Thread Artur Zakirov
Hey,

On Wed, 12 Mar 2025 at 10:11, Luca Ferrari  wrote:
> Now, according to the documentation, the function f_t is immutable
> since it is not modifying the database, so what is going on? And why
> does the same function work if the table does not have the constraint
> on the column?

I can reproduce this with the table `t` on PG 15.10.

In your case the file `base/357283/365810` is the file of the new index.
For some reason Postgres tries to read the new index; I suppose this is
because reading the table `t` inside the function `f_t` makes it access
the index that is still being built.

According to the documentation, IMMUTABLE functions must not modify the
database, and they must also return the same result given the same
arguments forever, which is generally not true for a function that queries
a table. Such a function should be declared STABLE or VOLATILE instead.
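
For example, a hedged sketch of such a declaration (the signature and body
of f_t here are invented; only the volatility keyword is the point):

```
CREATE OR REPLACE FUNCTION f_t(p_id int)
RETURNS bigint
STABLE  -- not IMMUTABLE: the result depends on the contents of table t
LANGUAGE sql
AS $$
    SELECT count(*) FROM t WHERE id = p_id;
$$;
```

Note that an expression index requires the functions it uses to be
IMMUTABLE, so a function that reads a table cannot safely be used in an
index definition.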

-- 
Kind regards,
Artur