error in trigger creation

2024-04-21 Thread yudhi s
Hi All,
We are seeing privilege issues while creating event triggers. It says the
user "*must be a superuser to create an event trigger*".

So my question is: if we have an application user "app_user" which is
responsible for creating database objects in schema "app_schema", and all
scripts are executed in the database through user "app_user" only, then is
there any way to grant the necessary privilege or to equip the app_user,
so that it will be able to assign the privilege, so we will be able to
create the event trigger without need to run the event trigger script from
super user itself? This will ensure that we do not need a "super user"
every time we want to run scripts that create an event trigger for our
application schema.
Something like "*grant create event trigger on schema app_schema to
app_user*"?
Regards
Yudhi


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, yudhi s  wrote:

> so that it will be able to assign the privilege, so we will be able to
> create the event trigger without need to run the event trigger script from
> super user itself?
>

Write a security-definer function owned by superuser and grant app_user
permission to execute it.

David J.


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sunday, April 21, 2024, yudhi s  wrote:
>
>> so that it will be able to assign the privilege, so we will be able to
>> create the event trigger without need to run the event trigger script from
>> super user itself?
>>
>
> Write a security-definer function owned by superuser and grant app_user
> permission to execute it.
>
> David J.
>
>

Thank You David.

 Are you saying something like below, in which we first create the function
from super user and then execute the grant? But doesn't that mean, each
time we want to create a new event trigger we have to be again dependent on
the "super user" to modify the security definer function?

CREATE OR REPLACE FUNCTION create_event_trigger_func()
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
  END;
$$;

GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;


Re: Logging statement having any threat?

2024-04-21 Thread Lok P
On Sat, Apr 20, 2024 at 10:02 PM Adrian Klaver 
wrote:

>
> Have you tried?:
>
>
> https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
>
> "
> log_statement (enum)
>
><...>
>
> The default is none. Only superusers and users with the appropriate SET
> privilege can change this setting.
> "
>
> Or
>
>
> https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SET
>
> set_config ( setting_name text, new_value text, is_local boolean ) → text
>
>
> >
> > Now when we reach out to the infrastructure team , they are saying these
> > variables(pg_cluster_log_statement,pg_instance_log_statement) were
>
> Where are those variables coming from? I can not find them in RDS or
> Terraform docs.
>
>
 Thank You Adrian.

Actually I was trying to understand if the auto_explain can only work and
help us see the slow sql statements in the log, only after we set the
"log_statement" parameter to non default values (like all, mod, ddl)?

And what is the exact threat with the logging these queries , and i think
,I got the point as you mentioned , having access to database  itself is
making someone to see the object details, however do you agree that in case
of RDS logs are available through different mediums like cloud watch, data
dog agent etc , so that may pose additional threats as because , may be
some person doesn't have access to database directly but still having
permission to see the logs, so the appropriate access control need to put
in place?

And additionally I was trying to execute the "SELECT
set_config('log_statement', 'all', true);" but it says "*permission denied
to set parameter "log_statement*".".So might be it needs a higher
privileged user to run it.

To answer your question on the variable those we have on the
terraform module, the terraform module is customized by the database infra
team so that might be why we are seeing those there which may not be
exactly the same as its showing in RDS docs for postgres.

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, yudhi s  wrote:

> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Sunday, April 21, 2024, yudhi s  wrote:
>>
>>> so that it will be able to assign the privilege, so we will be able to
>>> create the event trigger without need to run the event trigger script from
>>> super user itself?
>>>
>>
>> Write a security-definer function owned by superuser and grant app_user
>> permission to execute it.
>>
>> David J.
>>
>>
>
> Thank You David.
>
>  Are you saying something like below, in which we first create the
> function from super user and then execute the grant? But doesn't that mean,
> each time we want to create a new event trigger we have to be again
> dependent on the "super user" to modify the security definer function?
>

Dynamic SQL.  See “execute” in plpgsql.

David J.
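
As an illustration of that suggestion (the identifier names here are invented,
and the caveat Tom raises below still applies: whoever can call this can create
arbitrary event triggers), a minimal sketch might look like:

CREATE OR REPLACE FUNCTION create_event_trigger_for(trig_name text, func_name text)
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
    -- Build the statement at run time, so the superuser-owned wrapper does not
    -- have to be edited for every new event trigger. %I quotes the identifiers;
    -- schema-qualified function names are not handled in this sketch.
    EXECUTE format(
      'CREATE EVENT TRIGGER %I ON ddl_command_end EXECUTE FUNCTION %I()',
      trig_name, func_name);
  END;
$$;

GRANT EXECUTE ON FUNCTION create_event_trigger_for(text, text) TO app_user;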


Re: error in trigger creation

2024-04-21 Thread Tom Lane
"David G. Johnston"  writes:
> On Sunday, April 21, 2024, yudhi s  wrote:
>> Are you saying something like below, in which we first create the
>> function from super user and then execute the grant? But doesn't that mean,
>> each time we want to create a new event trigger we have to be again
>> dependent on the "super user" to modify the security definer function?

> Dynamic SQL.  See “execute” in plpgsql.

You might as well just give that user superuser and be done with it.
It's foolish to imagine that you have any shred of security left
if you're letting a user that's not 100.00% trusted write event
triggers.  (Much less execute any SQL command whatsoever, which
is what it sounds like David is suggesting you create a function
to do.)

regards, tom lane




Re: Logging statement having any threat?

2024-04-21 Thread Adrian Klaver

On 4/21/24 02:35, Lok P wrote:
On Sat, Apr 20, 2024 at 10:02 PM Adrian Klaver <adrian.kla...@aklaver.com> wrote:



Have you tried?:


https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
 


"
log_statement (enum)

    <...>

The default is none. Only superusers and users with the appropriate SET
privilege can change this setting.
"

Or

https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SET 


set_config ( setting_name text, new_value text, is_local boolean ) →
text


 >
 > Now when we reach out to the infrastructure team , they are
saying these
 > variables(pg_cluster_log_statement,pg_instance_log_statement) were

Where are those variables coming from? I can not find them in RDS or
Terraform docs.


> Thank You Adrian.
>
> Actually I was trying to understand if the auto_explain can only work
> and help us see the slow sql statements in the log, only after we set
> the "log_statement" parameter to non default values (like all, mod, ddl)?
>
> And what is the exact threat with the logging these queries , and i


log_statement = 'mod'

create role pwd_test with password 'test';
CREATE ROLE

tail -f /var/log/postgresql/postgresql-16-main.log

<...>
2024-04-21 09:04:17.746 PDT [9664] postgres@test LOG:  statement: create 
role pwd_test with password 'test';


> think ,I got the point as you mentioned , having access to database
> itself is making someone to see the object details, however do you agree
> that in case of RDS logs are available through different mediums like
> cloud watch, data dog agent etc , so that may pose additional threats as


Aah, the joys of managed services where you have to check even more 
layers when building out your security.  Logging itself is not the 
issue, who has access to the logs is. The more access points the more 
difficult that gets. Dealing with this is going to require a system wide 
review by all parties and coming up with an agreed upon access policy 
that balances security with the need to monitor what is happening in the 
database. Otherwise troubleshooting issues will be a long drawn out 
process which in itself could end up being a security issue.



> because , may be some person doesn't have access to database directly
> but still having permission to see the logs, so the appropriate access
> control need to put in place?
>
> And additionally I was trying to execute the "SELECT
> set_config('log_statement', 'all', true);" but it says "/permission
> denied to set parameter "log_statement/".".So might be it needs a higher
> privileged user to run it.
>
> To answer your question on the variable those we have on the
> terraform module, the terraform module is customized by the database
> infra team so that might be why we are seeing those there which may not
> be exactly the same as its showing in RDS docs for postgres.
>
> https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.html



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 7:55 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Sunday, April 21, 2024, yudhi s  wrote:
>
>> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Sunday, April 21, 2024, yudhi s  wrote:
>>>
 so that it will be able to assign the privilege, so we will be able to
 create the event trigger without need to run the event trigger script from
 super user itself?

>>>
>>> Write a security-definer function owned by superuser and grant app_user
>>> permission to execute it.
>>>
>>> David J.
>>>
>>>
>>
>> Thank You David.
>>
>>  Are you saying something like below, in which we first create the
>> function from super user and then execute the grant? But doesn't that mean,
>> each time we want to create a new event trigger we have to be again
>> dependent on the "super user" to modify the security definer function?
>>
>
> Dynamic SQL.  See “execute” in plpgsql.
>
> David J.
>
>

Even if we create the event trigger using a "security definer" function
that embeds the "create event trigger" within its body using dynamic SQL
(something as below), then in future, if we need to create another event
trigger, we will need to update and recompile the function again, and for
that it will need to be compiled as the "super user". Is my understanding
correct here?
Or
will it just need the "super user" to create the function for the first
time, after which the user who has been given the "execute grant" (say
app_user) will be able to update and recompile the function body?

CREATE OR REPLACE FUNCTION create_event_trigger_func()
  RETURNS void
  LANGUAGE plpgsql
  SECURITY DEFINER
  AS $$
  BEGIN
EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
  END;
$$;

GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sun, Apr 21, 2024 at 11:10 AM yudhi s 
wrote:

>
> On Sun, Apr 21, 2024 at 7:55 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Sunday, April 21, 2024, yudhi s  wrote:
>>
>>> On Sun, Apr 21, 2024 at 1:55 PM David G. Johnston <
>>> david.g.johns...@gmail.com> wrote:
>>>
 On Sunday, April 21, 2024, yudhi s  wrote:

> so that it will be able to assign the privilege, so we will be able to
> create the event trigger without need to run the event trigger script from
> super user itself?
>

 Write a security-definer function owned by superuser and grant app_user
 permission to execute it.

 David J.


>>>
>>> Thank You David.
>>>
>>>  Are you saying something like below, in which we first create the
>>> function from super user and then execute the grant? But doesn't that mean,
>>> each time we want to create a new event trigger we have to be again
>>> dependent on the "super user" to modify the security definer function?
>>>
>>
>> Dynamic SQL.  See “execute” in plpgsql.
>>
>> David J.
>>
>>
>
> Even if we create the event trigger using a "security definer" function
> that embeds the "create event trigger" within its body using dynamic SQL
> (something as below), then in future, if we need to create another event
> trigger, we will need to update and recompile the function again, and for
> that it will need to be compiled as the "super user". Is my understanding
> correct here?
> Or
> will it just need the "super user" to create the function for the first
> time, after which the user who has been given the "execute grant" (say
> app_user) will be able to update and recompile the function body?
>
> CREATE OR REPLACE FUNCTION create_event_trigger_func()
>   RETURNS void
>   LANGUAGE plpgsql
>   SECURITY DEFINER
>   AS $$
>   BEGIN
> EXECUTE 'CREATE EVENT TRIGGER event_trigger_name ON schema_name ...';
>   END;
> $$;
>
> GRANT EXECUTE ON FUNCTION create_event_trigger_func() TO app_user;
>

If you don't allow the caller to pass in parameters then no, you likely
gain nothing from using a security definer function.  It is a tool and I
don't have enough info or desire to write the internals of said function(s)
for your need.  As Tom says, it very well may be impossible to accomplish
your goal even with a security definer function.  But absent a predefined
role there is no other mechanism for the owners of objects or superusers to
delegate their non-grantable abilities to ordinary users.

David J.


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Sun, Apr 21, 2024 at 8:13 PM Tom Lane  wrote:

> "David G. Johnston"  writes:
> > On Sunday, April 21, 2024, yudhi s  wrote:
> >> Are you saying something like below, in which we first create the
> >> function from super user and then execute the grant? But doesn't that
> mean,
> >> each time we want to create a new event trigger we have to be again
> >> dependent on the "super user" to modify the security definer function?
>
> > Dynamic SQL.  See “execute” in plpgsql.
>
> You might as well just give that user superuser and be done with it.
> It's foolish to imagine that you have any shred of security left
> if you're letting a user that's not 100.00% trusted write event
> triggers.  (Much less execute any SQL command whatsoever, which
> is what it sounds like David is suggesting you create a function
> to do.)
>
>
So do you mean , we should not create the event trigger using the "security
definer" , rather have the super user do this each time we have to create
the event trigger?

Actually , I am not very much aware about the security part, but is it fine
to give the super user privilege to the application user(say app_user) from
which normally scripts/procedures get executed by the application, but
nobody(individual person) can login using that user.

Additionally in other databases, triggers are driven by some
specific privileges (say for example in oracle "create trigger" privilege).
And it doesn't need any super user and we were having many applications in
which the application user (which were used for app to app login) was
having these privileges, similar to "create table" privileges which comes
by default to the schema who owns the objects  etc. So in this case i was
wondering if "event trigger" can cause any additional threat and thus there
is no such privilege like "create trigger" exist in postgres and so it
should be treated cautiously?


Re: error in trigger creation

2024-04-21 Thread David G. Johnston
On Sun, Apr 21, 2024 at 11:20 AM yudhi s 
wrote:

>
> On Sun, Apr 21, 2024 at 8:13 PM Tom Lane  wrote:
>
>> "David G. Johnston"  writes:
>> > On Sunday, April 21, 2024, yudhi s  wrote:
>> >> Are you saying something like below, in which we first create the
>> >> function from super user and then execute the grant? But doesn't that
>> mean,
>> >> each time we want to create a new event trigger we have to be again
>> >> dependent on the "super user" to modify the security definer function?
>>
>> > Dynamic SQL.  See “execute” in plpgsql.
>>
>> You might as well just give that user superuser and be done with it.
>> It's foolish to imagine that you have any shred of security left
>> if you're letting a user that's not 100.00% trusted write event
>> triggers.  (Much less execute any SQL command whatsoever, which
>> is what it sounds like David is suggesting you create a function
>> to do.)
>>
>>
> So do you mean , we should not create the event trigger using the
> "security definer" , rather have the super user do this each time we have
> to create the event trigger?
>

I suggest you share a script that demonstrates exactly what you are trying
to accomplish.  Which event triggers you need to create from the
application and what the functions those triggers call do.


> Actually , I am not very much aware about the security part, but is it
> fine to give the super user privilege to the application user(say app_user)
> from which normally scripts/procedures get executed by the application, but
> nobody(individual person) can login using that user.
>

app_user should not be superuser nor own objects in the database.  The role
that performs schema migrations for the database should be able to become
superuser via set role so when doing migrations if there is a need to do
something as superuser it is possible but explicit.

It is during schema migrations that event triggers are expected to be
installed, not in response to some user hitting your website and having
your middleware execute some SQL while connected as the app_user role.

David J.
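
A rough sketch of that separation, with invented role names (the superuser role
to grant will differ per environment, and managed services may not expose a
true superuser at all):

-- app_user: what the application connects as; not a superuser, owns nothing
CREATE ROLE app_user LOGIN;

-- migrator: runs schema migrations; can escalate explicitly when required
CREATE ROLE migrator LOGIN NOINHERIT;
GRANT postgres TO migrator;   -- membership in a superuser role, usable only via SET ROLE

-- inside a migration script:
SET ROLE postgres;
CREATE EVENT TRIGGER my_event_trigger
    ON ddl_command_end
    EXECUTE FUNCTION my_event_trigger_func();   -- placeholder names
RESET ROLE;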


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Mon, Apr 22, 2024 at 12:02 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

>
> I suggest you share a script that demonstrates exactly what you are trying
> to accomplish.  Which event triggers you need to create from the
> application and what the functions those triggers call do.
>
>
We are using pg_partman for automatic partition maintenance. However, as we
have foreign keys created on the tables, the partition drop from parent is
taking longer as it scans all the partitions of the child table, and it also
locks the full child table for that duration (even SELECT queries are not
allowed during that period). So we are thinking of creating the foreign keys
on the partitions rather than on the tables, but there is no direct option
for that through pg_partman.

So we are thinking of first creating the table without any foreign keys and
creating the partitions using pg_partman, then creating the event trigger
below, which will add the foreign key to every new partition as future
partitions are created. We plan to create such an event trigger for each
child table that is partitioned and has FKs.

CREATE OR REPLACE FUNCTION add_partition_foreign_key()
RETURNS event_trigger
LANGUAGE plpgsql
AS $$
DECLARE
    obj record;
    partition_table TEXT;
    parent_partition TEXT;
BEGIN
    -- Row-trigger variables such as TG_TABLE_NAME and NEW are not available in
    -- event triggers; the objects created by the command are reported by
    -- pg_event_trigger_ddl_commands().
    FOR obj IN
        SELECT * FROM pg_event_trigger_ddl_commands()
        WHERE command_tag = 'CREATE TABLE'
    LOOP
        partition_table := obj.object_identity;
        -- Assumes the parent table's partitions follow the same date-suffix
        -- naming convention as the child's, so the referenced partition can be
        -- derived from the new partition's name (names here are placeholders).
        parent_partition := replace(partition_table, 'child_table', 'parent_table');

        EXECUTE format(
            'ALTER TABLE %s ADD CONSTRAINT %I FOREIGN KEY (partition_key, id) REFERENCES %s (partition_key, id)',
            partition_table,
            'fk_' || replace(partition_table, '.', '_'),
            parent_partition);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER add_partition_foreign_key_trigger
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE')
EXECUTE FUNCTION add_partition_foreign_key();


Re: error in trigger creation

2024-04-21 Thread Ron Johnson
On Sun, Apr 21, 2024 at 2:58 PM yudhi s  wrote:

> the partition drop from parent is taking longer as it scans all the
> partitions of the child table
>

Does the relevant supporting index exist on the child table?
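
A quick way to check from the catalogs (substitute the real schema and table
name for the illustrative 'app_schema.child_table'):

-- foreign keys defined on the referencing (child) table
SELECT conname, pg_get_constraintdef(oid)
FROM   pg_constraint
WHERE  conrelid = 'app_schema.child_table'::regclass
AND    contype = 'f';

-- indexes on the same table, to see whether the FK columns are covered
SELECT indexname, indexdef
FROM   pg_indexes
WHERE  schemaname = 'app_schema' AND tablename = 'child_table';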


query multiple schemas

2024-04-21 Thread Norbert Sándor

Hello,

I have a database with multiple tenants with a separate schema for each 
tenant.
The structure of each schema is identical, the tenant ID is the name of 
the schema.


What I would like to achieve is to be able to query tables in all schemas 
at once with the tenant IDs added to the result set.


I experimented with typed solutions like described in 
https://dev.to/staab/how-to-query-multiple-schemas-with-postgresql-28kj 
without much success.

So I turned to a more dynamic JSON-based solution.

Please note that I'm new to plpgsql, so /any/ (even a less related) 
advice is welcome :)


My current experimental function is:

   CREATE OR REPLACE FUNCTION demo.tenant_union_query(tbl anyelement)
   RETURNS setof json AS $func$
   declare
       _select text;
   begin
       _select := (select
           string_agg(
               format('select t.*, %L tenantId from %I.%I t', schema_name,
                      schema_name, pg_typeof(tbl)),
               E'\n' || ' union all ' || E'\n')
           from (
               SELECT schema_name
               FROM information_schema.schemata
               where schema_name not in ('information_schema') and schema_name
                     not like 'pg_%'
           ) tenants
       );

       return query execute 'select row_to_json(r) from (' || _select || ') as r';
   END;
   $func$ LANGUAGE plpgsql;

And this is how I use it to query a "usual" result-set-like result with 
the tenant ID in the 1st column, followed by the fields from the given 
table:


   select r->>'tenantid' tenantId, json_populate_record(null::mytable,
   r) from tenant_union_query(null::mytable) r order by tenantId; -- (1)

The above solution seems to work, my questions are:

1. Is there a better way to achieve the same functionality? Maybe
   without using JSON as an intermediate representation?
2. Is there a way to further simplify the usage, like instead of the
   query (1) above something more simple, like: select * from
   tenant_union_query_2(null::mytable) order by tenantId;

Thanks for your help in advance.
Best regards,
Norbi


Re: error in trigger creation

2024-04-21 Thread yudhi s
On Mon, 22 Apr, 2024, 1:34 am Ron Johnson,  wrote:

> On Sun, Apr 21, 2024 at 2:58 PM yudhi s 
> wrote:
>
>> the partition drop from parent is taking longer as it scans all the
>> partitions of the child table
>>
>
> Does the relevant supporting index exist on the child table?
>

Yes, all the child tables have their foreign keys indexed.

Again, I don't want to divert the main topic (trigger creation). I will
provide the exact test case showing how it takes an exclusive lock and runs
longer as it scans all the child partitions (which must be using the
indexes), and how the time to drop a partition increases as the number of
partitions increases.

>


Re: query multiple schemas

2024-04-21 Thread Steve Baldwin
Hi Norbi,

If the number of tenant schemas is reasonably static, you could write a
plpgsql function to create a set of UNION ALL views with one view for each
table in all tenant schemas. You could re-run the function each time a
tenant schema is added. Having the set of views would allow you to query
them as you would any of the underlying tables, and the query planner could
likely optimise the query better. With your current function, if you needed
to add a WHERE clause and the underlying tables were large, it would likely
not perform as well as the UNION ALL view.

Cheers,

Steve
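
A sketch of that approach, reusing the naming from the original function (the
"demo" schema and the generated view name are just placeholders); re-run it
whenever a tenant schema is added:

CREATE OR REPLACE FUNCTION demo.rebuild_tenant_view(tbl text)
RETURNS void AS $func$
declare
    _sql text;
begin
    select string_agg(
               format('select %L as tenantid, t.* from %I.%I t',
                      schema_name, schema_name, tbl),
               E'\n union all \n')
      into _sql
      from information_schema.schemata
     where schema_name not in ('information_schema')
       and schema_name not like 'pg_%';

    -- one ordinary UNION ALL view that the planner can push predicates into
    execute format('create or replace view demo.%I as %s', 'all_' || tbl, _sql);
end;
$func$ LANGUAGE plpgsql;

-- usage: SELECT demo.rebuild_tenant_view('mytable');
--        then query demo.all_mytable like any other table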

On Mon, Apr 22, 2024 at 6:12 AM Norbert Sándor 
wrote:

> Hello,
>
> I have a database with multiple tenants with a separate schema for each
> tenant.
> The structure of each schema is identical, the tenant ID is the name of
> the schema.
>
> What I would like to achieve is to able to query tables in all schemas at
> once with the tenant IDs added to the result  set.
>
> I experimented with typed solutions like described in
> https://dev.to/staab/how-to-query-multiple-schemas-with-postgresql-28kj
> without much success.
> So I turned to a more dynamic JSON-based solution.
>
> Please note that I'm new to plpgsql, so *any* (even a less related)
> advice is welcome :)
>
> My current experimental function is:
>
> CREATE OR REPLACE FUNCTION demo.tenant_union_query(tbl anyelement)
>
> RETURNS setof json AS $func$
>
> declare
>
> _select text;
>
> begin
>
> _select := (select
>
> string_agg(
>
> format('select t.*, %L tenantId from %I.%I t', schema_name, schema_name,
> pg_typeof(tbl)),
>
> E'\n' || ' union all ' || E'\n')
>
> from (
>
> SELECT schema_name
>
> FROM information_schema.schemata
>
> where schema_name not in ('information_schema') and schema_name not like
> 'pg_%'
>
> ) tenants
>
> );
>
> return query execute 'select row_to_json(r) from (' || _select || ') as r'
> ;
>
> END;
>
> $func$ LANGUAGE plpgsql;
>
> And this is how I use it to query a "usual" result-set-like result with
> the tenant ID in the 1st column, followed by the fields from the given
> table:
>
> select r->>'tenantid' tenantId, json_populate_record(null::mytable, r)
> from tenant_union_query(null::mytable) r order by tenantId; -- (1)
>
> The above solution seems to work, my questions are:
>
>1. Is there a better way to achieve the same functionality? Maybe
>without using JSON as an intermediate representation?
>2. Is there a way to further simplify the usage, like instead of the
>query (1) above something more simple, like: select * from
>tenant_union_query_2(null::mytable) order by tenantId;
>
> Thanks for your help in advance.
> Best regards,
> Norbi
>


query multiple schemas

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, Norbert Sándor 
wrote:
>
>
> The structure of each schema is identical, the tenant ID is the name of
> the schema.
>
You’ve hit the main reason why the scheme you chose is usually avoided.
Better to just add tenant_id to your tables in the first place.  And use
partitioned tables if you desire physical separation.

> The above solution seems to work, my questions are:
>
>1. Is there a better way to achieve the same functionality? Maybe
>without using JSON as an intermediate representation?
>
>
In-database, I doubt it (though I didn’t study your specific solution in
depth).  Json provides the easiest way to generate the virtual tables you
need.

Otherwise maybe try something with say bash scripting and psql scripts; or
some other client-side setup where you separate the query and the metadata
lookups so the queries just return normal results and the client takes them
and merges them.

David J.


Re: error in trigger creation

2024-04-21 Thread Adrian Klaver

On 4/21/24 11:20, yudhi s wrote:


On Sun, Apr 21, 2024 at 8:13 PM Tom Lane wrote:






> So do you mean , we should not create the event trigger using the
> "security definer" , rather have the super user do this each time we
> have to create the event trigger?
>
> Actually , I am not very much aware about the security part, but is it
> fine to give the super user privilege to the application user(say
> app_user) from which normally scripts/procedures get executed by the
> application, but nobody(individual person) can login using that user.
>
> Additionally in other databases, triggers are driven by some
> specific privileges (say for example in oracle "create trigger"
> privilege). And it doesn't need any super user and we were having many


Which Postgres has

https://www.postgresql.org/docs/current/ddl-priv.html

TRIGGER

Allows creation of a trigger on a table, view, etc.


but you are talking about event triggers

https://www.postgresql.org/docs/current/sql-createeventtrigger.html

where

"Only superusers can create event triggers."

To paraphrase Henry Ford, you can have any user for an event trigger as 
long as the user is a superuser.
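
In other words (table and function names invented), the first of these can be
delegated with a grant, the second cannot:

-- ordinary triggers: the TRIGGER privilege is grantable per table
GRANT TRIGGER ON app_schema.some_table TO app_user;

-- event triggers: no equivalent grant exists; this statement itself
-- requires a superuser
CREATE EVENT TRIGGER audit_ddl ON ddl_command_end
    EXECUTE FUNCTION audit_ddl_func();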




> applications in which the application user (which were used for app to
> app login) was having these privileges, similar to "create table"
> privileges which comes by default to the schema who owns the objects
> etc. So in this case i was wondering if "event trigger" can cause any
> additional threat and thus there is no such privilege like "create
> trigger" exist in postgres and so it should be treated cautiously?


An event trigger runs as a superuser and executes a function that in 
turn can do many things, you do the math on the threat level.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: query multiple schemas

2024-04-21 Thread Tom Lane
Steve Baldwin  writes:
> If the number of tenant schemas is reasonably static, you could write a
> plpgsql function to create a set of UNION ALL views with one view for each
> table in all tenant schemas. You could re-run the function each time a
> tenant schema is added. Having the set of views would allow you to query
> them as you would any of the underlying tables, and the query planner could
> likely optimise the query better. With your current function, if you needed
> to add a WHERE clause and the underlying tables were large, it would likely
> not perform as well as the UNION ALL view.

Another idea is to build a partitioned table comprising all the
individual tables.  This might require the tables to have all the
same owner (not sure about that) and it'd require you to store the
partition key, ie tenant, explicitly in every table which might
seem a bit wasteful.  But it'll likely outperform any other solution
for the union queries.  Notably, adding or deleting a partition would
be much less painful than redefining a UNION ALL view.

regards, tom lane
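
For example (invented names; each tenant becomes a LIST partition keyed by an
explicit tenant_id column):

CREATE TABLE demo.mytable (
    tenant_id text   NOT NULL,
    id        bigint NOT NULL,
    payload   text,
    PRIMARY KEY (tenant_id, id)   -- the partition key must be part of the PK
) PARTITION BY LIST (tenant_id);

CREATE TABLE demo.mytable_tenant_a PARTITION OF demo.mytable FOR VALUES IN ('tenant_a');
CREATE TABLE demo.mytable_tenant_b PARTITION OF demo.mytable FOR VALUES IN ('tenant_b');

-- adding or removing a tenant is one CREATE/DROP (or ATTACH/DETACH) of a partition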




Re: error in trigger creation

2024-04-21 Thread Tom Lane
Adrian Klaver  writes:
> On 4/21/24 11:20, yudhi s wrote:
>> So in this case i was wondering if "event trigger" can cause any 
>> additional threat and thus there is no such privilege like "create 
>> trigger" exist in postgres and so it should be treated cautiously?

> An event trigger runs as a superuser and executes a function that in 
> turn can do many things, you do the math on the threat level.

As a trivial example: an event trigger could prevent the legitimate
superuser(s) from doing anything at all in that database, just by
blocking all their commands.  This might not even require malicious
intent, merely faulty coding --- but the opportunity for malicious
intent is staggeringly large.

regards, tom lane
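
A trivial sketch of exactly that (names invented):

CREATE OR REPLACE FUNCTION block_everything() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE EXCEPTION 'DDL is disabled on this database';
END;
$$;

CREATE EVENT TRIGGER no_ddl ON ddl_command_start
    EXECUTE FUNCTION block_everything();
-- from here on every DDL command, including the superuser's, fails until the
-- trigger is dropped (see the single-user-mode escape hatch Adrian quotes below)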




CLUSTER vs. VACUUM FULL

2024-04-21 Thread Ron Johnson
PG 14.11 on RHEL8

Why is VACUUM FULL recommended for compressing a table, when CLUSTER does
the same thing (similarly doubling disk space), and apparently runs just as
fast?

My tests:
Table: CDSLBXW.public.log

              Time 1  Time 2  Time 3
              secs    secs    secs
VACUUM FULL   44.2    39.3    42.3
CLUSTER       41.7    38.9    41.3

CDSLBXW=# \d public.log
 Table "public.log"
 Column  |Type | Collation | Nullable |
  Default
-+-+---+--+-
 log_id  | bigint  |   | not null |
nextval('log_log_id_seq'::regclass)
 level   | numeric(10,0)   |   |  |
 source  | character varying(255)  |   |  |
 username| character varying(255)  |   |  |
 user_login_id   | character varying(255)  |   |  |
 user_ip_address | character varying(255)  |   |  |
 computer| character varying(255)  |   |  |
 search_tag  | character varying(4000) |   |  |
 log_group_id| integer |   |  |
 created_on  | timestamp without time zone |   | not null |
 created_by  | integer |   |  |
 xml_detail  | bytea   |   |  |
Indexes:
"pk_log" PRIMARY KEY, btree (log_id)
"idx_log_attr_source" btree (source)
"idx_log_level" btree (level)
"idx_log_search_tag" btree (search_tag)

CDSLBXW=# SELECT COUNT(*) FROM public.log;
  count
-
 6774664
(1 row)

CDSLBXW=# \timing
CDSLBXW=# vacuum (full) public.log;
VACUUM
Time: 44190.799 ms (00:44.191)
CDSLBXW=#
CDSLBXW=# vacuum (full) public.log;
VACUUM
Time: 39316.824 ms (00:39.317)
CDSLBXW=# vacuum (full) public.log;
VACUUM
Time: 42336.268 ms (00:42.336)

CDSLBXW=# CLUSTER public.log USING pk_log;
CLUSTER
Time: 41722.335 ms (00:41.722)
CDSLBXW=#
CDSLBXW=# CLUSTER public.log USING pk_log;
CLUSTER
Time: 38915.128 ms (00:38.915)
CDSLBXW=#
CDSLBXW=# CLUSTER public.log USING pk_log;
CLUSTER
Time: 41342.651 ms (00:41.343)


Re: CLUSTER vs. VACUUM FULL

2024-04-21 Thread Tom Lane
Ron Johnson  writes:
> Why is VACUUM FULL recommended for compressing a table, when CLUSTER does
> the same thing (similarly doubling disk space), and apparently runs just as
> fast?

CLUSTER makes the additional effort to sort the data per the ordering
of the specified index.  I'm surprised that's not noticeable in your
test case.

regards, tom lane




Re: CLUSTER vs. VACUUM FULL

2024-04-21 Thread Ron Johnson
On Sun, Apr 21, 2024 at 6:45 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > Why is VACUUM FULL recommended for compressing a table, when CLUSTER does
> > the same thing (similarly doubling disk space), and apparently runs just
> as
> > fast?
>
> CLUSTER makes the additional effort to sort the data per the ordering
> of the specified index.  I'm surprised that's not noticeable in your
> test case.
>

It's in a freshly restored database.  However, regular deletions of old
records, and normal vacuums would have led me to expect that the "fresh"
public.log would have been in relatively random order.


Re: CLUSTER vs. VACUUM FULL

2024-04-21 Thread Ron Johnson
On Sun, Apr 21, 2024 at 6:45 PM Tom Lane  wrote:

> Ron Johnson  writes:
> > Why is VACUUM FULL recommended for compressing a table, when CLUSTER does
> > the same thing (similarly doubling disk space), and apparently runs just
> as
> > fast?
>
> CLUSTER makes the additional effort to sort the data per the ordering
> of the specified index.  I'm surprised that's not noticeable in your
> test case.
>

Clustering on a completely different index  was also 44 seconds.


Re: error in trigger creation

2024-04-21 Thread Adrian Klaver

On 4/21/24 14:21, Tom Lane wrote:

Adrian Klaver  writes:

On 4/21/24 11:20, yudhi s wrote:

So in this case i was wondering if "event trigger" can cause any
additional threat and thus there is no such privilege like "create
trigger" exist in postgres and so it should be treated cautiously?



An event trigger runs as a superuser and executes a function that in
turn can do many things, you do the math on the threat level.


As a trivial example: an event trigger could prevent the legitimate
superuser(s) from doing anything at all in that database, just by
blocking all their commands.  This might not even require malicious
intent, merely faulty coding --- but the opportunity for malicious
intent is staggeringly large.


As an FYI to above:

https://www.postgresql.org/docs/current/sql-createeventtrigger.html

"Event triggers are disabled in single-user mode (see postgres). If an 
erroneous event trigger disables the database so much that you can't 
even drop the trigger, restart in single-user mode and you'll be able to 
do that."







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: CLUSTER vs. VACUUM FULL

2024-04-21 Thread David Rowley
On Mon, 22 Apr 2024 at 12:16, Ron Johnson  wrote:
>
> On Sun, Apr 21, 2024 at 6:45 PM Tom Lane  wrote:
>>
>> Ron Johnson  writes:
>> > Why is VACUUM FULL recommended for compressing a table, when CLUSTER does
>> > the same thing (similarly doubling disk space), and apparently runs just as
>> > fast?
>>
>> CLUSTER makes the additional effort to sort the data per the ordering
>> of the specified index.  I'm surprised that's not noticeable in your
>> test case.
>
> Clustering on a completely different index  was also 44 seconds.

Both VACUUM FULL and CLUSTER go through a very similar code path. Both
use cluster_rel().  VACUUM FULL just won't make use of an existing
index to provide presorted input or perform a sort, whereas CLUSTER
will attempt to choose the cheapest out of these two to get sorted
results.

If the timing for each is similar, it just means that using an index
scan or sorting isn't very expensive compared to the other work that's
being done.  Both CLUSTER and VACUUM FULL require reading every heap
page and writing out new pages into a new heap and maintaining  all
indexes on the new heap. That's quite an effort.

To satisfy your curiosity, you could always run some EXPLAIN ANALYZE
SELECT queries to measure how much time was spent sorting the entire
table. You'd have to set work_mem to the value of
maintenance_work_mem.

David
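
For example (session-level settings; the table and index names are taken from
the \d output earlier in the thread):

-- approximate the sort that CLUSTER public.log USING pk_log has to perform
SET work_mem = '64MB';   -- set this to your maintenance_work_mem value
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM public.log ORDER BY log_id;
RESET work_mem;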




Re: CLUSTER vs. VACUUM FULL

2024-04-21 Thread Ron Johnson
On Sun, Apr 21, 2024 at 9:35 PM David Rowley  wrote:

> On Mon, 22 Apr 2024 at 12:16, Ron Johnson  wrote:
> >
> > On Sun, Apr 21, 2024 at 6:45 PM Tom Lane  wrote:
> >>
> >> Ron Johnson  writes:
> >> > Why is VACUUM FULL recommended for compressing a table, when CLUSTER
> does
> >> > the same thing (similarly doubling disk space), and apparently runs
> just as
> >> > fast?
> >>
> >> CLUSTER makes the additional effort to sort the data per the ordering
> >> of the specified index.  I'm surprised that's not noticeable in your
> >> test case.
> >
> > Clustering on a completely different index  was also 44 seconds.
>
> Both VACUUM FULL and CLUSTER go through a very similar code path. Both
> use cluster_rel().  VACUUM FULL just won't make use of an existing
> index to provide presorted input or perform a sort, whereas CLUSTER
> will attempt to choose the cheapest out of these two to get sorted
> results.
>
> If the timing for each is similar, it just means that using an index
> scan or sorting isn't very expensive compared to the other work that's
> being done.  Both CLUSTER and VACUUM FULL require reading every heap
> page and writing out new pages into a new heap and maintaining  all
> indexes on the new heap. That's quite an effort.
>

My original CLUSTER command didn't have to change the order of the data
very much, thus, the sort didn't have to do much work.

CLUSTER on a different index was indeed much slower than VACUUM FULL.


Missing PG_MODULE_MAGIC error

2024-04-21 Thread Abhishek Chanda
Hi all,

I am trying to test an extension that links to rocksdb. When I try to load
it in postgres, it complains "missing magic block" while I do
have PG_MODULE_MAGIC in my src. Here is a minimal repro

// pgtam.c
#include "postgres.h"
#include 
#include "fmgr.h"
#include "access/tableam.h"
#include "access/heapam.h"
#include "nodes/execnodes.h"
#include "catalog/index.h"
#include "commands/vacuum.h"
#include "utils/builtins.h"
#include "executor/tuptable.h"

#include "rocksdb/c.h"

PG_MODULE_MAGIC;

// Makefile
MODULE_big = pgtam
EXTENSION = pgtam
DATA = pgtam--0.0.1.sql

PG_CPPFLAGS += -L/usr/local/lib -g
SHLIB_LINK += -L/usr/local/lib -lrocksdb

PG_CONFIG = /usr/local/pgsql/bin/pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

// extension build
╰─$ make clean && make && sudo make install
rm -f pgtam.dylib   libpgtam.a  libpgtam.pc
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Werror=vla
-Werror=unguarded-availability-new -Wendif-labels
-Wmissing-format-attribute -Wcast-function-type -Wformat-security
-fno-strict-aliasing -fwrapv -Wno-unused-command-line-argument
-Wno-compound-token-split-by-macro -g -ggdb -Og -g3 -fno-omit-frame-pointer
 -fvisibility=hidden -bundle -o pgtam.dylib  -L/usr/local/pgsql/lib
 -isysroot
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.4.sdk
  -Wl,-dead_strip_dylibs   -fvisibility=hidden -L/usr/local/lib -lrocksdb
-bundle_loader /usr/local/pgsql/bin/postgres
/opt/homebrew/bin/gmkdir -p '/usr/local/pgsql/lib'
/opt/homebrew/bin/gmkdir -p '/usr/local/pgsql/share/extension'
/opt/homebrew/bin/gmkdir -p '/usr/local/pgsql/share/extension'
/opt/homebrew/bin/ginstall -c -m 755  pgtam.dylib
'/usr/local/pgsql/lib/pgtam.dylib'
/opt/homebrew/bin/ginstall -c -m 644 .//pgtam.control
'/usr/local/pgsql/share/extension/'
/opt/homebrew/bin/ginstall -c -m 644 .//pgtam--0.0.1.sql
 '/usr/local/pgsql/share/extension/'

Postgres build: cloned and built at SHA
849172ff4883d44168f96f39d3fde96d0aa34c99

╰─$ /usr/local/pgsql/bin/pg_ctl -D test-db -l logfile restart
waiting for server to shut down done
server stopped
waiting for server to start done
server started

╭─~/src/postgres ‹849172ff48●›
╰─$ /usr/local/pgsql/bin/psql postgres
psql (17devel)
Type "help" for help.

postgres=# CREATE EXTENSION pgtam;
ERROR:  incompatible library "/usr/local/pgsql/lib/pgtam.dylib": missing
magic block
HINT:  Extension libraries are required to use the PG_MODULE_MAGIC macro.

What am I missing?


Re: Not able to grant access on pg_signal_backend on azure flexible server

2024-04-21 Thread Saksham Joshi
Hi,
I am afraid that's not the case with postgresql 16 since we also tested
with postgresql 15 we are able to run this command( "GRANT
pg_signal_backend To "our_admin_user") successfully with our admin user but
that's not the case with postgresql 16 we keep getting the error message
'permission denied to grant role "pg_signal_backend"'. Also, in Postgres 16,
only the current users of the database are able to close the
connections. Can you help with this, please?

On Fri, 19 Apr 2024, 19:50 Adrian Klaver,  wrote:

> On 4/18/24 22:48, Saksham Joshi wrote:
> > Hi,
> > We have created an azure postgresql flexible server and we have added an
> > ad admin as a user and Created our database using this admin
> > user.However,When are running this command: 'Grant pg_signal_backend To
> > adminUser' we are getting an error that says 'permission denied to grant
> > role "pg_signal_backend".While this is strange the admin user is infact
> > the owner of the said database and we don't have any other user that
> > have the said privileges.
>
>
> https://www.postgresql.org/docs/current/predefined-roles.html
>
> "PostgreSQL provides a set of predefined roles that provide access to
> certain, commonly needed, privileged capabilities and information.
> Administrators (including roles that have the CREATEROLE privilege) can
> GRANT these roles to users and/or other roles in their environment,
> providing those users with access to the specified capabilities and
> information."
>
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Not able to grant access on pg_signal_backend on azure flexible server

2024-04-21 Thread David G. Johnston
On Sunday, April 21, 2024, Saksham Joshi  wrote:

> Hi,
> I am afraid that's not the case with postgresql 16 since we also tested
> with postgresql 15 we are able to run this command( "GRANT
> pg_signal_backend To "our_admin_user") successfully with our admin user but
> that's not the case with postgresql 16 we keep getting the error message
> 'permission denied to grant role "pg_signal_backend".
>

Probably you haven’t granted pg_signal_backend to whichever role you are
executing the above grant command with.  One cannot grant what one does not
have themselves, with the admin option.

David J.


Re: Not able to grant access on pg_signal_backend on azure flexible server

2024-04-21 Thread Saksham Joshi
Hi David,
We have tried granting it with our admin user as well, but to no avail.

On Mon, 22 Apr 2024, 10:50 David G. Johnston, 
wrote:

> On Sunday, April 21, 2024, Saksham Joshi  wrote:
>
>> Hi,
>> I am afraid that's not the case with postgresql 16 since we also tested
>> with postgresql 15 we are able to run this command( "GRANT
>> pg_signal_backend To "our_admin_user") successfully with our admin user but
>> that's not the case with postgresql 16 we keep getting the error message
>> 'permission denied to grant role "pg_signal_backend".
>>
>
> Probably you haven’t granted pg_signal_backend to whichever role you are
> executing the above grant command with.  One cannot grant what one does not
> have themselves, with the admin option.
>
> David J.
>
>


Re: Not able to grant access on pg_signal_backend on azure flexible server

2024-04-21 Thread David G. Johnston
On Sun, Apr 21, 2024 at 10:49 PM Saksham Joshi 
wrote:

> We have tried granting it with our admin user also but no avail.
>

This is how v16+ works in a community installation:

postgres 1=# create role cr createrole;
CREATE ROLE

postgres 2=# set role cr;
SET

postgres 2=> create role otherrole;
CREATE ROLE
postgres 2=> grant pg_signal_backend to otherrole;
ERROR:  permission denied to grant role "pg_signal_backend"
DETAIL:  Only roles with the ADMIN option on role "pg_signal_backend" may
grant this role.

postgres 1=# grant pg_signal_backend to cr with admin option;
GRANT ROLE

postgres 2=> grant pg_signal_backend to otherrole;
GRANT ROLE

David J.