Re: Psycopg3 fails to resolve 'timezone localtime' on MacOS
On Fri, 6 May 2022 at 03:04, Jerry Sievers wrote:
> Has anyone run into this? Psycopg3 fails to resolve timezone localtime on
> MacOS (Catalina). It falls back to UTC regardless of whether running
> with/without the tzdata package, which I did try.
>
> There is a /etc/localtime symlink on this box pointed at the correct file
> in the tz data directory, which apparently doesn't get used. Adding a
> temporary symlink directly into the tz data directory got things working,
> but I'm skeptical of such a hack.

I'm not using Mac but Linux, and not Python, but I think you have some concepts mixed up. On Linux (and it seems Mac is pretty similar), to define a timezone you put something readable in tzfile(5) format (i.e. a file, or a symlink to a file, as you did) in /usr/share/zoneinfo; this is the "tz data directory". Then, to set the system default timezone, you copy (or link) a tzfile to /etc/localtime. Note this DOES NOT define a named timezone that you can use, and there is NOT a timezone named "localtime". Normally you use a symbolic link from /etc/localtime into the timezone directory, which has the nice side effect of letting you read the link to learn the zone's name, but it does not have to be a link. I think this is leading you into thinking "localtime" is a valid name, which it is not, or was not until you defined it by creating the name with the temporary symlink. If you want a zone named "localtime" pointing to the default zone, adding a symlink named "localtime" in the tz data dir (directory entries are what define the names) pointing to /etc/localtime will do it, but I would not recommend it.

Your errors are pretty strange. Are you exporting TZ (or whichever env var MacOS uses, or a similar thing) set to "localtime" in any of the relevant environments (i.e., the server or client process), or setting the server timezone to "localtime"? Because normally, when you want "localtime", what you want is the process's system default timezone, so you do not set anything in the process ($TZ included) and let the library fall back to the (unnamed) system default timezone. This seems like a configuration problem.

Francisco Olarte.
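Francisco's distinction between the tz data directory (whose entries define zone names) and /etc/localtime (which only sets the unnamed default) is easy to probe from Python, the language psycopg3 lives in. A minimal sketch using the standard-library zoneinfo module; it assumes a system or tzdata-package zone database is installed:

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

# A zone name is valid only if a matching entry exists in the tz data
# directory (e.g. /usr/share/zoneinfo). "localtime" normally has no such
# entry, so resolving it by name fails regardless of what the
# /etc/localtime symlink points at.
for name in ("UTC", "localtime"):
    try:
        ZoneInfo(name)
        print(name, "-> resolved")
    except ZoneInfoNotFoundError:
        print(name, "-> no such zone name")
```

This is why adding a "localtime" symlink inside the tz data directory "fixes" the lookup: it creates the name. Leaving the name undefined and letting the library fall back to the system default is the conventional setup.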
Re: Replication with Patroni not working after killing secondary and starting again
> Does https://patroni.readthedocs.io/en/latest/replication_modes.html help?

Thanks. I had found the same in the meantime. The effects I experienced were caused by the fact that Patroni configures asynchronous replication by default. After changing it to synchronous, everything worked as expected.
"A transaction cannot be ended inside a block with exception handlers."
The PG doc section 43.8, "Transaction Management" (https://www.postgresql.org/docs/current/plpgsql-transactions.html), says: "A transaction cannot be ended inside a block with exception handlers."

It's easy to demonstrate the restriction by adding this just before the final "end;" in the simple example at the start of the section:

  exception
    when invalid_transaction_termination then
      raise info 'invalid_transaction_termination caught';

The procedure now terminates with "invalid_transaction_termination caught". The "Transaction Management" section is new, in the "PL/pgSQL - SQL Procedural Language" chapter, in version 11. The caveat is there, and it's been there ever since. I'd been hoping that the restriction would have been lifted by version 14.

I want to demonstrate how to meet this requirement:

« Encapsulate each business function in a user-defined subprogram that hides all the implementation details, like table names and the SQL statements that manipulate their contents, so that they cannot be seen using SQL issued from the client. Further, don't allow raw errors to escape to the client. Rather, if an expected error occurs (like a unique key violation), report this as an application-specific code that translates to, for example, "This nickname is already taken. Choose a different one." And if an "others" error occurs (typically because the programmer forgot to cater for it, like a too-wide varchar value), insert diagnostic info into an incident log table and return an "unexpected error" application-specific code together with the incident ID so that it can be reported to Support. »

I've written proof-of-concept code that shows how to meet this requirement for most scenarios. But it seems to be impossible to meet the requirement for errors that occur at commit time. Such errors occur, for example, when two "serializable" sessions contend. It's easy to provoke a commit-time error in a single-session demo with an "initially deferred" constraint trigger.
(See the code at the end.) Another example would be an "initially deferred" FK constraint where an application-code bug occasionally fails to meet the required conditions before the commit.

Is there simply no way that inserts into table "t" in my example can be encapsulated in PL/pgSQL so that the error from the failing trigger can be handled there rather than escaping, raw, to the client?

-- "initially deferred" constraint trigger demo

create table t(k int primary key);

create function trg_fn()
  returns trigger
  language plpgsql
as $body$
declare
  bad constant boolean not null :=
    (select exists(select 1 from t where k = 42));
begin
  -- Uncomment to observe when trg fires.
  -- raise info 'trg fired';
  if bad then
    assert false, 'trigger trg caused exception';
  end if;
  return null;
end;
$body$;

create constraint trigger trg
  after insert on t
  initially deferred
  for each row
  execute function trg_fn();

-- Do by hand, statement by statement.

-- OK. Notice that "trg" fires, five times, at "commit" time.
start transaction;
insert into t(k) values(1), (2), (3), (4), (5);
commit;

-- "ERROR: trigger trg caused exception" occurs at "commit" time.
start transaction;
insert into t(k) values(42);
commit;
Re: "A transaction cannot be ended inside a block with exception handlers."
On Fri, May 6, 2022 at 4:40 PM Bryn Llewellyn wrote:
> Is there simply no way that inserts into table "t" in my example can be
> encapsulated in PL/pgSQL so that the error from the failing trigger can be
> handled there rather than escaping, raw, to the client?

Any potential solution to this problem will involve writing a stored procedure (CREATE PROCEDURE) which becomes the API layer for the application, where each procedure probably issues a commit just prior to returning control to the calling application. Its exception handler can transform the at-commit errors into application errors and then return control to the calling application, which then needs to handle either a clean return or an application-level error return.

David J.
Vertical partition
Hi,

Can you please tell us how to do vertical partitioning in PostgreSQL?

Thanks,
RamaKrishnan
Re: Vertical partition
On Friday, May 6, 2022, Rama Krishnan wrote:
> Can you please tell us how to do vertical partitioning in PostgreSQL?

Manually. "Create table" with the columns you want in each. Your FK column will also be your PK column on the non-primary table.

David J.
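To make "manually" concrete: vertical partitioning is just two tables that share a key, with the secondary table's primary key doubling as the foreign key. The sketch below uses Python's built-in sqlite3 purely as a convenient SQL sandbox (the same DDL shape applies in PostgreSQL); all table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# The "wide" table is split by hand: frequently used columns stay in
# users; rarely used ones move to users_profile.
con.execute("""
    CREATE TABLE users (
        user_id  INTEGER PRIMARY KEY,
        nickname TEXT NOT NULL
    )""")
con.execute("""
    CREATE TABLE users_profile (
        -- PK of the secondary table is also the FK to the primary one.
        user_id INTEGER PRIMARY KEY REFERENCES users(user_id),
        bio     TEXT
    )""")

con.execute("INSERT INTO users VALUES (1, 'rama')")
con.execute("INSERT INTO users_profile VALUES (1, 'likes Postgres')")

# Reassemble the original wide row with a join on the shared key.
row = con.execute("""
    SELECT u.nickname, p.bio
    FROM users u JOIN users_profile p USING (user_id)
""").fetchone()
print(row)  # ('rama', 'likes Postgres')
```

There is no dedicated command for this, which is why the answer is "manually": the split, the shared-key constraint, and the reassembling join are all ordinary DDL and DML.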
Re: Vertical partition
Thanks a lot. That means a normal primary key and foreign key relationship, right? Can you please send me a reference link?

On Sat, 7 May 2022, 06:21 David G. Johnston wrote:
> Manually. "Create table" with the columns you want in each. Your FK column
> will also be your PK column on the non-primary table.
Re: Vertical partition
On Friday, May 6, 2022, Rama Krishnan wrote:
> Can you please send me a reference link?

Like what? Read the documentation for those commands and features.

David J.
How to get value wrapped in json?
A JSON object is like this: {3}. How do I get the value, 3, out of this JSON object?

David
pg_dump: VACUUM and REINDEXING
Hello! I have some questions around backup and restore.

1. Is it necessary to perform a VACUUM and REINDEX operation after restoring a dump from Postgres 9.6 into Postgres 13? The dump size could be 1/2 TB to 1 TB.

2. Are there any other operations that are recommended after pg_restore?

3. What is the minimum required disk space if taking a dump on the same machine where the source database lives? Is it the size of the current data folder times two?

Thanks.
Hasan
Re: "A transaction cannot be ended inside a block with exception handlers."
> david.g.johns...@gmail.com wrote:
>
>> b...@yugabyte.com wrote:
>>
>> Is there simply no way that inserts into table "t" in my example can be
>> encapsulated in PL/pgSQL so that the error from the failing trigger can
>> be handled there rather than escaping, raw, to the client?
>
> Any potential solution to this problem will involve writing a stored
> procedure (CREATE PROCEDURE) which becomes the API layer for the
> application, and each one probably issues a commit just prior to returning
> control to the calling application. Its exception handler can transform
> the at-commit errors to application errors and then return control to the
> calling application.

Eh? A stored procedure that has a commit like you suggest cannot have an exception handler like you also suggest. That's what the doc says; I quoted the sentence in my subject line. More carefully stated: if you have such a procedure, then any txn control statement that it executes will cause this:

  ERROR: 2D000: cannot roll back while a subtransaction is active

Sure, you can handle this. But that gets you nowhere: the procedure will always end this way and never do what you wanted it to do. Sorry if my email wasn't clear.

I'll be delighted if somebody can show me working PL/pgSQL code that uses the setup that I showed and takes this as a starting point:

create procedure do_insert(good in boolean)
  language plpgsql
as $body$
begin
  case good
    when true then
      for j in 10..20 loop
        insert into t(k) values(j);
      end loop;
    when false then
      insert into t(k) values(42);
  end case;
end;
$body$;

As presented, it ends like this when it's called with "false":

  ERROR: P0004: trigger trg caused exception
  CONTEXT: PL/pgSQL function trg_fn() line 9 at ASSERT
  LOCATION: exec_stmt_assert, pl_exec.c:3918

Modify it along the lines that David suggests so that, when it's called with "true", it completes silently and makes the intended changes. And when it's called with "false", it reports that it handled the P0004 error via "raise info" and then returns without error. The rest (presumably with an OUT parameter) is easy.
Re: "A transaction cannot be ended inside a block with exception handlers."
Bryn Llewellyn writes:
> I've written proof-of-concept code that shows how to meet this requirement
> [don't let raw errors escape to the client] for most scenarios. But it
> seems to be impossible to meet the requirement for errors that occur at
> commit time.

So ... avoid those? It seems like it's only a problem if you use deferred constraints, and that's not a necessary feature.

> Is there simply no way that inserts into table "t" in my example can be
> encapsulated in PL/pgSQL so that the error from the failing trigger can be
> handled there rather than escaping, raw, to the client?

If you want a strict view of that, you probably need to be doing the encapsulation on the client side. There's nothing you can do on the server side that would prevent, say, network-connection failures "escaping" to the client. And that's actually one of the harder cases to deal with: if the connection drops just after you issue COMMIT, you can't tell whether the transaction got committed.

regards, tom lane
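Tom's "encapsulate on the client side" means wrapping COMMIT itself in the client's error handling, since a deferred constraint only fires there. The shape of that wrapper can be sketched with Python's built-in sqlite3, which also supports initially-deferred foreign keys; this is an analogy to illustrate where the try/except must sit, not psycopg/PostgreSQL code:

```python
import sqlite3

# isolation_level=None: the module executes statements verbatim, so we
# control BEGIN/COMMIT ourselves, as a client-side API layer would.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE t(k INTEGER PRIMARY KEY)")
con.execute("""
    CREATE TABLE child(
        k INTEGER REFERENCES t(k) DEFERRABLE INITIALLY DEFERRED
    )""")

con.execute("BEGIN")
con.execute("INSERT INTO child VALUES (42)")  # no error yet: check deferred
try:
    con.execute("COMMIT")                     # the violation surfaces here
    print("committed")
except sqlite3.Error as exc:
    # The failed COMMIT leaves the transaction open; clean it up and
    # translate the raw error into an application-level outcome.
    con.execute("ROLLBACK")
    print("caught at commit:", exc)
```

The key point carries over: the handler must enclose the COMMIT statement itself, which is exactly what a PL/pgSQL block with exception handlers is forbidden to do.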
Re: How to get value wrapped in json?
On Friday, May 6, 2022, Shaozhong SHI wrote:
> A json object is like this - {3}.
>
> How to get the value - 3 - out of this json object {3}

That isn't valid JSON, so your question doesn't make sense. Just treat it as a text type and use one or more of the documented text functions to manipulate it.

David J.
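David's point is easy to verify with any conforming parser: a JSON object must contain key/value pairs, so {3} is rejected outright, while {"value": 3} or [3] would carry the number. A quick check with Python's built-in json module:

```python
import json

for text in ('{3}', '{"value": 3}', '[3]'):
    try:
        # json.loads parses standards-conforming JSON only.
        print(repr(text), "->", json.loads(text))
    except json.JSONDecodeError as exc:
        print(repr(text), "-> invalid JSON:", exc.msg)
```

If the data really arrives as the literal text {3}, string functions (in SQL or in the client language) are the only option, as the reply says.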
Re: "A transaction cannot be ended inside a block with exception handlers."
> t...@sss.pgh.pa.us wrote:
>
> So ... avoid those? It seems like it's only a problem if you use deferred
> constraints, and that's not a necessary feature.
>
> If you want a strict view of that, you probably need to be doing the
> encapsulation on the client side. There's nothing you can do on the server
> side that would prevent, say, network-connection failures "escaping" to
> the client. And that's actually one of the harder cases to deal with: if
> the connection drops just after you issue COMMIT, you can't tell whether
> the transaction got committed.

I'll take this to mean that there is no plan for PG ever to allow txn control in a PL/pgSQL block that has an exception handler. Please tell me if I misunderstood.

Your point about a certain class of server-side error is well taken. (In Oracle Database, at least, errors like that cannot be handled in an exception block; they inevitably escape to the client. Is it the same in PG?) But I'm not convinced by this "what-about-ism" argument that it's pointless to handle those errors that do allow it, so that hackers get only the bare minimum information on things like schema-object names. The more that hackers know about a system's internals, the better their chances of doing evil.

You said "it's only a problem if you use deferred constraints, and that's not a necessary feature". My example was contrived, but some requirements (like entity-level constraints) require commit-time checking. Say a department must have just one or two staff whose job is 'Admin'. (This is presumably why the feature exists.) Using the serializable isolation level is another possible approach, but doing that can also lead to commit-time errors.

Is there really no sympathy for what I want to achieve?
Re: "A transaction cannot be ended inside a block with exception handlers."
I do understand better now, and indeed the current limitation has no workaround that I can come up with. I was hoping maybe sub-blocks would work, but it's pretty clear-cut that to catch an error at the commit command you must catch it within a block, and the commit error will be raised first.

On Fri, May 6, 2022 at 9:23 PM Bryn Llewellyn wrote:
> I'll take this to mean that there is no plan for PG ever to allow txn
> control in a PL/pgSQL block that has an exception handler. Please tell me
> if I misunderstood.

You misunderstand how the development of PostgreSQL works generally. But I suppose, as a practical matter, if you aren't going to spearhead the change you might as well assume it will not be possible until it is.

> Is there really no sympathy for what I want to achieve?

I personally have sympathy, and if you submitted a patch to improve matters here I don't see anyone saying that it would be unwanted.

As for the circular-dependency-breaking use of deferred constraints (or your explicitly deferred triggers), you have the SET CONSTRAINTS ALL IMMEDIATE command:

postgres=# call do_insert(false);
ERROR: trigger trg caused exception
CONTEXT: PL/pgSQL function trg_fn() line 9 at ASSERT
SQL statement "SET CONSTRAINTS ALL IMMEDIATE"
PL/pgSQL function do_insert(boolean) line 12 at SQL statement

postgres=# create or replace procedure do_insert(good in boolean)
  language plpgsql
as $body$
begin
  begin
    case good
      when true then
        for j in 10..20 loop
          insert into t(k) values(j);
        end loop;
      when false then
        insert into t(k) values(42);
    end case;
    SET CONSTRAINTS ALL IMMEDIATE;
    commit;
  end;
exception
  when invalid_transaction_termination then
    raise exception 'caught invalid';
  when others then
    raise notice 'others - ok';
    commit;
end;
$body$;

David J.