show primary_conninfo unchanged after promotion?
Hi! I might be doing something wrong, but I cannot see what...
The server is 14.3 (Debian 14.3-1.pgdg110+1) (at the time of writing, the Dockerfile is FROM postgres:14).
I have a 3-node cluster, starting with pg_red (172.26.0.2) as primary. I promote pg_green (172.26.0.4) as the new primary, and all the other nodes are aligned.

Now, on pg_green (172.26.0.4):

=# select inet_server_addr();
 inet_server_addr
------------------
 172.26.0.4

=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 f

=# select * from pg_stat_replication; (CUT)
 pid | usesysid | usename | application_name | client_addr
-----+----------+---------+------------------+-------------
 486 |    16384 | rep     | walreceiver      | 172.21.0.2
 625 |    16384 | rep     | walreceiver      | 172.21.0.4

But if I execute show primary_conninfo, I get

=# show primary_conninfo ;
 user=rep passfile='/.pgpass' channel_binding=prefer host=pg_red port=5432 sslmode=prefer sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any

which says the primary is pg_red, which is the old primary.

What is it that I am doing wrong?!? Tnx!
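This behaviour is expected rather than a misconfiguration: primary_conninfo is an ordinary server setting (typically written into postgresql.auto.conf by pg_basebackup -R or pg_rewind -R), and promotion does not rewrite it; once the server is out of recovery the setting is simply ignored. A minimal sketch of clearing the stale value on the promoted node, assuming it lives in postgresql.auto.conf:

```
-- run on the promoted primary (pg_green in this thread)
alter system reset primary_conninfo;   -- removes the entry from postgresql.auto.conf
select pg_reload_conf();               -- on 13+ a reload is enough for this setting
```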
Feature request: psql --idle
Since changing the pg_hba.conf file to give users access involves a restart of the server, many companies I work(ed) for use a bastion host, which users ssh to and are allowed to "somehow" use PostgreSQL from.

Still, those users need some login shell.

It would be great to give them psql as a login shell (in /etc/passwd). But doing so will result in psql exiting with the usual error:

$ psql
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?

What would help is a --idle option, where psql does not exit but stays idle and waits for the user to give a \connect command. Something similar to

sqlplus /nolog

Is anything like that feasible, or is there another solution/workaround? Tnx!
Re: Feature request: psql --idle
Thanks all for the alternative solutions. Yet, despite my (wrong and misleading) use case, I still share Julien's view of this being useful, whatever the parameter or the use case. My 2 cents :-)

On Wed, Jul 27, 2022 at 2:49 PM Wiwwo Staff wrote:
> Since changing the pg_hba.conf file to give users access involves a restart
> of the server, many companies I work(ed) for use a bastion host, which users
> ssh to and are allowed to "somehow" use PostgreSQL from.
>
> Still, those users need some login shell.
>
> It would be great to give them psql as a login shell (in /etc/passwd).
> But doing so will result in psql exiting with the usual error:
> $ psql
> psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed:
> No such file or directory
> Is the server running locally and accepting connections on that socket?
>
> What would help is a --idle option, where psql does not exit but stays idle
> and waits for the user to give a \connect command.
> Something similar to
> sqlplus /nolog
>
> Is anything like that feasible, or is there another solution/workaround?
>
> Tnx!
Feature request(?): Proxy User
Sorry to post solutions and links about alternative DBMSs, but I miss this sort of "sudo" at database user level: https://oracle-base.com/articles/misc/proxy-users-and-connect-through

Is there any approach to achieve the same result in PostgreSQL? Does anyone else share the need for this functionality? Tnx!
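The closest built-in PostgreSQL mechanism (not a full equivalent of Oracle proxy users) is role membership plus SET ROLE: the "proxy" role authenticates with its own credentials and then switches to the target role it has been granted. A minimal sketch, with illustrative role names:

```
-- app_user logs in with its own password, then acts as target_user
create role target_user nologin;
create role app_user login password 'secret';
grant target_user to app_user;

-- in app_user's session:
set role target_user;   -- privileges of target_user from here on
select current_user;    -- -> target_user
reset role;             -- back to app_user
```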
Allow user to connect to replicas only
Hi! Is there any way to create a user allowed to connect to a/any read replica only, as in "not allowed to connect to the primary"?
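There is no role attribute for this, but since each node has its own pg_hba.conf, one common approach is to reject the role on the primary while allowing it on the replicas (a sketch, assuming a role named readonly_user; the reject line must appear before any broader rule that would match, and the file must be reloaded after editing):

```
# primary's pg_hba.conf
host    all    readonly_user    0.0.0.0/0    reject

# replicas' pg_hba.conf
host    all    readonly_user    0.0.0.0/0    scram-sha-256
```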
Monitoring-only role
Hi there! Is there (a way to have) a "monitoring only" role in PG? This role should have access to the stats views only, maybe something more, but no access to data whatsoever. Tnx!
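The predefined pg_monitor role (available since PostgreSQL 10) is meant for exactly this: read access to the statistics views and monitoring functions without access to table data. A minimal sketch:

```
create role monitor login password 'secret';
grant pg_monitor to monitor;  -- predefined role: member of pg_read_all_settings,
                              -- pg_read_all_stats and pg_stat_scan_tables
```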
Disallow execution of shell commands from psql
Hi! Happy new (Gregorian calendar) year! Somewhat related to the proposal of having a `psql --idle` option: is there a way to disallow the command `\!` (and anything of the like) in psql? Sure, I can set the SHELL env var at run time, but I still want the postgres user to be a normal user, with its own shell etc., which means it can change this SHELL setting somewhere. Any tip?
Tablespace OID, database OID, relfilenode
Hi! Quick and to the point: if I execute

select pg_relation_filepath('pgbench_accounts');

I get

 pg_relation_filepath
-----------------------
 base/5/16398

where "base" stands for the tablespace, but is neither the tablespace OID nor the tablespace name. How do I get from "base" to the tablespace OID?

Or, in other words, given the table pgbench_accounts, how do I get the tablespace OID, database OID and relfilenode, so that I can use this info to filter the output of pg_waldump? Thanks a ton!
Re: Tablespace OID, database OID, relfilenode
Hi Laurenz, thanks for your answer!
Ok, but "1663" isn't some sort of constant value for "base", is it?
Let me clarify my question: how do I get, with a query, from "base" to 1663 (or whatever the value is)?

On Tue, Jan 17, 2023 at 4:01 PM Laurenz Albe wrote:
> On Tue, 2023-01-17 at 15:48 +0100, Wiwwo Staff wrote:
> > Quick and to the point:
> >
> > If I execute
> > select pg_relation_filepath('pgbench_accounts');
> >
> > I get
> >  pg_relation_filepath
> > -----------------------
> >  base/5/16398
> >
> > where "base" stands for the tablespace, but is neither the tablespace OID
> > nor the tablespace name. How do I get from "base" to the tablespace OID?
> >
> > Or, in other words, given the table pgbench_accounts, how do I get the
> > tablespace OID, database OID and relfilenode so that I can use this info
> > to filter the output of pg_waldump?
>
> The tablespace OID is 1663, since "base" is the default tablespace
> "pg_default".
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
Re: Tablespace OID, database OID, relfilenode
Thank you Adrian, thank you David!
So, it turns out, it *is* indeed an arbitrary constant... I have been answered RTFM many times; it is actually the first time I have been answered RTF source code. (I searched for 1663 in the PG docs; it is there, although not really "intuitive".)

Allow me to humbly propose to at least state somewhere in the docs, in a clear way: "This is a number you should know, it is arbitrary, and in some functions it will be represented as 'base'." It is fundamental to know this when one uses pg_waldump, for example. Not to start a flame war here, just humbly proposing...

Thanks all again!

On Tue, Jan 17, 2023 at 4:44 PM Adrian Klaver wrote:
> On 1/17/23 07:35, Wiwwo Staff wrote:
> > Hi Laurenz,
> > thanks for your answer!
> > Ok, but "1663" isn't some sort of constant value for "base", is it?
>
> From source:
>
> backend/catalog/postgres.bki
>
> open pg_tablespace
> insert ( 1663 pg_default 10 _null_ _null_ )
> insert ( 1664 pg_global 10 _null_ _null_ )
>
> > Let me clarify my question: how do I get, with a query, from "base" to
> > 1663 (or whatever the value is)?
>
> select * from pg_tablespace ;
>  oid  |  spcname   | spcowner | spcacl | spcoptions
> ------+------------+----------+--------+------------
>  1663 | pg_default |       10 | NULL   | NULL
>  1664 | pg_global  |       10 | NULL   | NULL
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
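For completeness, a query-only way to get all three values pg_waldump needs (a sketch; in pg_class, reltablespace = 0 means "the database's default tablespace", which in an unmodified cluster is pg_default, OID 1663):

```
select coalesce(nullif(c.reltablespace, 0), d.dattablespace) as tablespace_oid,
       d.oid        as database_oid,
       c.relfilenode
from pg_class c, pg_database d
where c.relname = 'pgbench_accounts'
  and d.datname = current_database();
```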
pg_rewind and user / passfile
Hi!
I have noticed that if I use

pg_basebackup -D $PGDATA -h $NEW_PRIMARY -U $SOME_ADMIN_USER

(where $SOME_ADMIN_USER is a privileged user, not the replica user), I get a "show primary_conninfo" such as

user= passfile='/var/lib/postgresql/.pgpass' channel_binding=prefer host=pg_red port=5432 sslmode=prefer sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any

which is cool, since the user is the correct replica user (not the one I specified at the command line), AND, more importantly, there is no password there.

If instead, on an old primary, I perform a pg_rewind, the primary_conninfo is

user= password= channel_binding=prefer host=pg_blue port=5432 sslmode=prefer sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any

If I use the replica-specific user, I get

ERROR: permission denied for function pg_read_binary_file

Questions (and the point of this mail):
* Is there a way to tell pg_rewind to use the passfile?
* Am I doing something wrong?

Thanks!
Re: pg_rewind and user / passfile
Sorry for the confusion, I must have done some crazy stuff with the user for pg_basebackup.
Please just consider the question:
* Is there a way to tell pg_rewind to use the passfile?

Thanks!

On Wed, Jan 25, 2023 at 10:37 AM Wiwwo Staff wrote:
> Hi!
> I have noticed that if I use
> pg_basebackup -D $PGDATA -h $NEW_PRIMARY -U $SOME_ADMIN_USER
> (where $SOME_ADMIN_USER is a privileged user, not the replica user),
> I get a "show primary_conninfo" such as
> user= passfile='/var/lib/postgresql/.pgpass'
> channel_binding=prefer host=pg_red port=5432 sslmode=prefer
> sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2
> gssencmode=prefer krbsrvname=postgres target_session_attrs=any
>
> which is cool, since the user is the correct replica user (not the one I
> specified at the command line), AND, more importantly, there is no password there.
>
> If instead, on an old primary, I perform a pg_rewind, the primary_conninfo is
> user= password=
> channel_binding=prefer host=pg_blue port=5432
> sslmode=prefer sslcompression=0 sslsni=1 ssl_min_protocol_version=TLSv1.2
> gssencmode=prefer krbsrvname=postgres target_session_attrs=any
> If I use the replica-specific user, I get
> ERROR: permission denied for function pg_read_binary_file
>
> Questions (and the point of this mail):
> * Is there a way to tell pg_rewind to use the passfile?
> * Am I doing something wrong?
>
> Thanks!
pg_rewind and replication user
Hi!
Given my replication user, created with

CREATE USER repl_user REPLICATION LOGIN ENCRYPTED PASSWORD '';

if I run pg_rewind referring to this user:

postgres@host1:~: pg_rewind -D $PGDATA --source-server="host=nre_primary port=5432 user=repl_user passfile='/var/lib/postgresql/.pgpass' dbname=postgres" -P -R

I get

pg_rewind: connected to server
pg_rewind: error: could not fetch remote file "global/pg_control": ERROR: permission denied for function pg_read_binary_file

My question/doubt is: do I need to run pg_rewind with a superuser only? Or am I "just" forgetting some grant for the replication user? Thanks!
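The REPLICATION attribute alone is not enough: on the source server, pg_rewind reads files through a handful of regular functions, and the pg_rewind documentation lists the EXECUTE grants a non-superuser source connection needs. A sketch of those grants, to be run on the source (new primary):

```
CREATE USER rewind_user LOGIN;  -- or reuse an existing role
GRANT EXECUTE ON FUNCTION pg_ls_dir(text, boolean, boolean) TO rewind_user;
GRANT EXECUTE ON FUNCTION pg_stat_file(text, boolean) TO rewind_user;
GRANT EXECUTE ON FUNCTION pg_read_binary_file(text) TO rewind_user;
GRANT EXECUTE ON FUNCTION pg_read_binary_file(text, bigint, bigint, boolean) TO rewind_user;
```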
FOR UPDATE SKIP LOCKED and get locked row/avoid updating other row(s)
Hi!
I am implementing a queue using PostgreSQL. I am of course using "FOR UPDATE SKIP LOCKED".

Is there any way I can tell PostgreSQL to only "operate" on the locked row, and/or a way to reference it?

Some explanation of what I mean:
- I have a table with N rows
- I lock row X with a PG function, in transaction 1
- I do something in my code, using transaction 2
- I update the status of row X with a PG function, in transaction 1

In the last step, I update the status of row X, passing my function the ID of this row X. But nothing stops me from updating row Y in the queue table, for whatever reason.

My question again: is there any way to enforce "just update the row X you locked before, and nothing else/more"? Thanks!
Re: FOR UPDATE SKIP LOCKED and get locked row/avoid updating other row(s)
On Tue, 13 Feb 2024 at 14:49, David G. Johnston wrote:
> On Tuesday, February 13, 2024, Wiwwo Staff wrote:
>> Hi!
>> I am implementing a queue using PostgreSQL.
>> I am of course using "FOR UPDATE SKIP LOCKED".
>>
>> Is there any way I can tell PostgreSQL to only "operate" on the locked
>> row, and/or a way to reference it?
>>
>> Some explanation of what I mean:
>> - I have a table with N rows
>> - I lock row X with a PG function, in transaction 1
>> - I do something in my code, using transaction 2
>> - I update the status of row X with a PG function, in transaction 1
>>
>> In the last step, I update the status of row X, passing my function the ID
>> of this row X. But nothing stops me from updating row Y in the queue table,
>> for whatever reason.
>>
>> My question again: is there any way to enforce "just update the row X you
>> locked before, and nothing else/more"?
>
> Holding locks and open transactions while doing queue job processing is
> generally not a good idea anyway, so the lack of this ability doesn't seem
> too problematic - but a cursor can probably get you close. You can abstract
> interactions with the queue table through a functional API to implement a
> server-enforced policy, removing the ability for clients to do arbitrary
> queries on the underlying tables. The checkout function can tag who got
> the job and the completion function can validate that the input arguments
> supplied by the client belong to a job they checked out.
>
> David J.

Hi David, thanks for your answer.
You are absolutely right, it is not a good idea. The reasoning behind it is that 1) everything is Python-controlled and 2) - more importantly - the DB user in charge of consuming the queue has grants just on that part, and absolutely nothing else. Hence the two sessions and the process separation.

Cosimo
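To make David's suggestion concrete, here is a minimal sketch of such a functional API, with a hypothetical job_queue table and claimed_by column (all names are illustrative, not from the thread):

```
create table job_queue (
    id         bigint generated always as identity primary key,
    payload    text,
    status     text not null default 'pending',
    claimed_by text
);

-- checkout: grab one pending job and remember who took it
create function claim_job(p_worker text) returns bigint
language sql security definer as $$
    update job_queue
       set status = 'in_progress', claimed_by = p_worker
     where id = (select id from job_queue
                  where status = 'pending'
                  order by id
                  limit 1
                  for update skip locked)
    returning id;
$$;

-- completion: only succeeds for a job this worker actually claimed
create function complete_job(p_id bigint, p_worker text) returns boolean
language sql security definer as $$
    update job_queue
       set status = 'done'
     where id = p_id
       and claimed_by = p_worker
    returning true;
$$;
```

With SECURITY DEFINER, the consuming role needs only EXECUTE on these two functions and no privileges on job_queue itself, so "update row Y instead" is ruled out by the check inside complete_job.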
Performance (and general) considerations between views and functions
Hi Community!
I am in the process of rewriting SQL in app code into a more DB-side approach.

My first idea was to write views and let the app query those views; I'd need to rewrite some queries to expose the "parameters" (the where-condition fields) in the select part, so the app can filter on them. Not a huge issue.

But I might also create functions (language sql) declared with "RETURNS TABLE", and pass the where conditions as parameters; I would have to bite the bullet and write all the "RETURNS TABLE (... ... ...)" definitions, but meh. :-D

My question is about performance: the select on the function looks way faster, but I am afraid the PG optimizer does not really "understand" the complexity of the query. For example, the explain on the view understands that the query always returns 1 row; the explain on the function does not. On the other side, functions can be tuned with many "hints" (COST, ROWS, etc.).

In my shoes, what would you do? Thanks a ton!
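For reference, a side-by-side sketch of the two shapes being compared (table and column names are illustrative). A simple LANGUAGE sql function that is STABLE or IMMUTABLE, not SECURITY DEFINER and consists of a single SELECT can usually be inlined by the planner, in which case the parameterised filter ends up in the plan much like the view-plus-WHERE version:

```
-- view approach: the caller supplies the filter
CREATE VIEW orders_v AS
    SELECT o.id, o.customer_id, o.total
      FROM orders o;
-- SELECT * FROM orders_v WHERE customer_id = 42;

-- function approach: the filter is a parameter
CREATE FUNCTION orders_f(p_customer_id bigint)
RETURNS TABLE (id bigint, customer_id bigint, total numeric)
LANGUAGE sql STABLE AS $$
    SELECT o.id, o.customer_id, o.total
      FROM orders o
     WHERE o.customer_id = p_customer_id;
$$;
-- SELECT * FROM orders_f(42);
```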
HISTIGNORE in psql
Hi! As a big history lover, it would be great to have the HISTIGNORE functionality added to psql. Have a great day! Wiwwo
Re: HISTIGNORE in psql
Hi Adrian,
yes, of course, that is what I meant: being able to tell psql to exclude some patterns from being stored in the .psql_history file.

On Tue, 18 Jun 2024 at 16:12, Adrian Klaver wrote:
> On 6/17/24 23:45, Wiwwo Staff wrote:
> > Hi!
> > As a big history lover, it would be great to have the HISTIGNORE
> > functionality added to psql.
>
> HISTIGNORE is a BASH feature and psql runs in more than the BASH shell.
>
> Do you mean you would like a clone of that feature added to psql?
>
> Also psql uses either readline or libedit, so it would need to work with
> those.
>
> > Have a great day!
> > Wiwwo
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
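For reference, psql already has a coarser knob in this area: the HISTCONTROL variable can drop lines beginning with a space (ignorespace), lines matching the previous history line (ignoredups), or both (ignoreboth), though it does no pattern matching like bash's HISTIGNORE. A sketch of setting it, e.g. in ~/.psqlrc:

```
\set HISTCONTROL ignoreboth
```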
Check psql parameter is passed, if not ask for it
Hi! I want to check whether I passed the parameter "param1" to a sql script "delme.sql", like

```
\if ':param1'
    \echo :param1;
\else
    \prompt 'Param1? ' :param1
\endif
select :'param1';
```

If the parameter `param1` is not passed, the .sql should ask for it.
If I run something like `psql -f delme.sql -v param1=1` it works, but if I pass `psql -f delme.sql -v param1=meh` I receive

`psql:delme.sql:1: error: unrecognized value ":param1" for "\if expression": Boolean expected`

How can I implement that? TIA, Wiwwo
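A sketch of the usual way to solve this (both \if and the :{?variable_name} "is this variable defined?" test exist since PostgreSQL 10): test whether the variable is defined instead of using its value as a boolean, and let \prompt store into the bare variable name:

```
\if :{?param1}
    \echo :param1
\else
    \prompt 'Param1? ' param1
\endif
select :'param1';
```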
On partitioning, PKs and FKs
Hi!
I have a big table bigTable which I partitioned by hash on field columnX, creating bigTable_0, bigTable_1 etc.
Since I need a PK on bigTable.id, but the table is not partitioned by id and columnX is not unique, I added a PK on bigTable_0.id, bigTable_1.id etc. instead. So far, so good.

Now I have anotherTable, which has a column bigTable_id referencing bigTable.id.
Creating the FK anotherTable__bigTable_FK on anotherTable.bigTable_id - you guessed it :D - fails with

> [42830] ERROR: there is no unique constraint matching given keys for
> referenced table "bigTable"

Creating a FK pointing to one of the hash partitions bigTable_0.id, bigTable_1.id etc. also fails, with

> ERROR: 23503: insert or update on table "anotherTable" violates foreign
> key constraint "anotherTable__bigTable_FK".

Do I have a way out? :)
Re: On partitioning, PKs and FKs
Very high variability on the partitioned column, which is filtered on (where condition) in (almost) all queries.

On Thu, 8 Jul 2021 at 14:23, Michael Lewis wrote:
> Why are you using hash partitioning?
Re: On partitioning, PKs and FKs
On Thu, 8 Jul 2021 at 21:42, Alban Hertroys wrote:
> On 2021-07-08 13:30, Ron wrote:
> > Thus, the bigTable PK must be on id, columnX. (No, I don't like it
> > either.)
>
> That's not entirely true. You can keep the PK on id if you additionally
> create a unique constraint on (id, columnX).

Uhm, but that means I have to partition by id *and* columnX, which is not what I really want...

=> create table bigtable (
       id numeric primary key,
       columnX varchar(5),
       constraint test_unique unique (id, columnX)
   ) partition by hash (columnx);
ERROR: 0A000: unique constraint on partitioned table must include all partitioning columns
DETAIL: PRIMARY KEY constraint on table "bigtable" lacks column "columnx" which is part of the partition key.
LOCATION: DefineIndex, indexcmds.c:946
Time: 146.254 ms

Also

   ) partition by hash (id, columnx);
ERROR: 0A000: unique constraint on partitioned table must include all partitioning columns
DETAIL: PRIMARY KEY constraint on table "bigtable" lacks column "columnx" which is part of the partition key.
LOCATION: DefineIndex, indexcmds.c:946
Time: 150.646 ms

and

   ) partition by hash (id);
CREATE TABLE
Time: 152.780 ms

which is not what I want...
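Following Ron's point in the quoted thread, a sketch of the shape that does work on PostgreSQL 12+ (types and names are illustrative): the composite key satisfies the "must include all partitioning columns" rule, and the referencing table carries columnx as well so the FK can match the whole constraint.

```
create table bigtable (
    id      bigint,
    columnx varchar(5),
    primary key (id, columnx)
) partition by hash (columnx);
-- hash partitions still need to be created with
-- "partition of bigtable for values with (modulus ..., remainder ...)"

create table anothertable (
    id               bigint primary key,
    bigtable_id      bigint,
    bigtable_columnx varchar(5),
    foreign key (bigtable_id, bigtable_columnx)
        references bigtable (id, columnx)
);
```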
On enforcing default column value, AKA "Bloody nulls"
Hi all! In a scenario like this:

> =# create table tab1(text1 text default 'from table');
>
> =# create procedure ins_tab1(p_text1 text default 'from proc') language sql as
> -# $$
> $# insert into tab1(text1) values (p_text1);
> $# $$;
>
> =# call ins_tab1 (null);
>
> =# select * from tab1 ;
>  text1
> -------
>  [nUlL]

is there a way (apart from triggers) to force one of the default values to apply? Thanks!
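Defaults only apply when the column (or the argument) is omitted, not when an explicit NULL is supplied, so one trigger-free sketch is to have the procedure fall back to the column default itself via the DEFAULT keyword (this needs plpgsql rather than a plain SQL body, since it branches on the argument):

```
create or replace procedure ins_tab1(p_text1 text default 'from proc')
language plpgsql as
$$
begin
    if p_text1 is null then
        insert into tab1(text1) values (default);  -- column default 'from table' applies
    else
        insert into tab1(text1) values (p_text1);
    end if;
end;
$$;
```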