Re: DISTINCT *and* ORDER BY in aggregate functions on expressions(!)
> On 28 Feb 2023, at 3:54, Thorsten Glaser wrote:
(…)
>> Well, that may be what you want, but it's not what you wrote in
>> the query. Follow David's advice and do
> […]
>> I'm pretty sure that this will only incur one evaluation of the
>> common subexpression, so even though it's tedious to type it's not
>> inefficient.
>
> Thanks. But I fear it’s not as simple as you wrote. More like:
>
> jsonb_build_object('opening_times',
>     jsonb_agg(DISTINCT jsonb_build_object(
>             'weekday', cot.weekday,
>             'from_hour', cot.from_hour,
>             'to_hour', cot.to_hour)
>         ORDER BY
>             jsonb_build_object(
>                 'weekday', cot.weekday,
>                 'from_hour', cot.from_hour,
>                 'to_hour', cot.to_hour)->>'weekday',
>             jsonb_build_object(
>                 'weekday', cot.weekday,
>                 'from_hour', cot.from_hour,
>                 'to_hour', cot.to_hour)->>'from_hour',
>             jsonb_build_object(
>                 'weekday', cot.weekday,
>                 'from_hour', cot.from_hour,
>                 'to_hour', cot.to_hour)->>'to_hour')
> )
>
> Isn’t that more like it?

Perhaps you can use a lateral cross join to get the result of jsonb_build_object as a jsonb value to pass around?

Alban Hertroys
--
If you can't see the forest for the trees, cut the trees and you'll find there is no forest.
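Something along those lines, perhaps (an untested sketch: company_opening_times stands in for whatever table the cot alias points at, and the j/obj/dedup names are made up). The lateral join builds each jsonb object once, the inner SELECT DISTINCT does the deduplication, and the outer aggregate is then free to ORDER BY the extracted keys without tripping over the rule that an aggregate with DISTINCT may only ORDER BY its own arguments:

SELECT jsonb_build_object('opening_times',
           jsonb_agg(obj ORDER BY obj->>'weekday',
                                  obj->>'from_hour',
                                  obj->>'to_hour'))
FROM (
    SELECT DISTINCT j.obj
    FROM company_opening_times cot          -- hypothetical table behind the cot alias
    CROSS JOIN LATERAL jsonb_build_object(
        'weekday',   cot.weekday,
        'from_hour', cot.from_hour,
        'to_hour',   cot.to_hour) AS j(obj)
) AS dedup;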
Re: Temporary table is not getting created inside Function in postgres.
Hi All,

Switching to plpgsql and moving the rest under DECLARE did it; now everything works fine. Thanks for the help.

On Tue, Feb 28, 2023 at 2:10 AM David G. Johnston <david.g.johns...@gmail.com> wrote:
> On Mon, Feb 27, 2023 at 1:17 PM nikhil raj wrote:
>
>> The same query works properly when executed outside the function (just the part
>> from the DROP TEMP TABLE through the INSERT), but when I keep the query inside a
>> function it throws me an error. After the temp table creation I am only using a
>> SELECT statement, and still it's not executing the CREATE statement or storing
>> the temp table.
>
> What Tom said is that in the following you should change 'sql' to plpgsql and move on.
> Your function is not capable of being executed as an SQL language function.
>
> CREATE OR REPLACE FUNCTION api."post_publish_Roster"(
> )
> RETURNS void
> LANGUAGE 'sql'
> COST 100
> VOLATILE PARALLEL UNSAFE
> AS $BODY$
>
> David J.
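For the archives, a minimal sketch of the shape being described (the temp table, the SELECT 1 placeholder and the api.some_target table are invented placeholders; only the function name comes from the quoted definition):

CREATE OR REPLACE FUNCTION api."post_publish_Roster"()
RETURNS void
LANGUAGE plpgsql   -- not 'sql': an SQL-language function parses its whole body up front,
                   -- so a later statement can't reference a temp table created earlier in it
VOLATILE PARALLEL UNSAFE
AS $BODY$
BEGIN
    DROP TABLE IF EXISTS tmp_roster;           -- hypothetical temp table name
    CREATE TEMP TABLE tmp_roster AS
        SELECT 1 AS id;                        -- placeholder for the real query
    INSERT INTO api.some_target (id)           -- hypothetical target table
        SELECT id FROM tmp_roster;
END;
$BODY$;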
13.x, stream replication and locale(?) issues
Hello.

I have a 13.4 pgsql instance on Linux which has a bunch of databases with UTF-8/ru_RU.utf8 encoding/collation set. I've stream replicated it to a 13.10 instance on FreeBSD (maybe this is the part where it all went wrong, but at this moment I believe streaming replication should work since both run the same major version). And funny things started to happen.

First, the instance said:

sql: error: FATAL:  database locale is incompatible with operating system
DETAIL:  The database was initialized with LC_COLLATE "ru_RU.utf8", which is not recognized by setlocale().
HINT:  Recreate the database with another locale or install the missing locale.

I decided to go the easy way and just symlinked the ru_RU.UTF-8 locale to ru_RU.utf8, because from my understanding it's the same locale, just cased differently (seems like I was totally wrong). The database was running fine for quite some time, and then I got tonnes of complaints about weird SQL query behavior. I'll just illustrate the difference:

*Master instance:*

tpark-rbac=# select id, service_name from micro_service where service_name='profile';
 id | service_name
----+--------------
 17 | profile
(1 row)

tpark-rbac=# select operation_id, micro_service_id from micro_service_operation where operation_id='admin.member-request.list';
        operation_id       | micro_service_id
---------------------------+------------------
 admin.member-request.list |               17
(1 row)

tpark-rbac=# SELECT ms.id FROM micro_service_operation mso, micro_service ms where mso.micro_service_id=ms.id and ms.service_name='profile' AND mso.operation_id='admin.member-request.list';
 id
----
 17
(1 row)

*Standby instance:*

tpark-rbac=# select id, service_name from micro_service where service_name='profile';
 id | service_name
----+--------------
 17 | profile
(1 row)

tpark-rbac=# select operation_id, micro_service_id from micro_service_operation where operation_id='admin.member-request.list';
        operation_id       | micro_service_id
---------------------------+------------------
 admin.member-request.list |               17
(1 row)

tpark-rbac=# SELECT ms.id FROM micro_service_operation mso, micro_service ms where mso.micro_service_id=ms.id and ms.service_name='profile' AND mso.operation_id='admin.member-request.list';
 id
----
(0 rows)

The thing is, as it seems, that the value "admin.member-request.list" isn't quite "admin.member-request.list" on the standby:

tpark-rbac=# SELECT ms.id, mso.operation_id, ms.service_name, length(mso.operation_id) as msolength FROM micro_service_operation mso, micro_service ms where mso.micro_service_id=ms.id and ms.service_name='profile' and mso.operation_id like 'admin.member-request.list%';
 id |        operation_id       | service_name | msolength
----+---------------------------+--------------+-----------
 17 | admin.member-request.list | profile      |        25
(1 row)

tpark-rbac=# SELECT ms.id, mso.operation_id, ms.service_name, length(mso.operation_id) as msolength FROM micro_service_operation mso, micro_service ms where mso.micro_service_id=ms.id and ms.service_name='profile' and mso.operation_id like 'admin.member-request.list';
 id | operation_id | service_name | msolength
----+--------------+--------------+-----------
(0 rows)

And I suppose this is because of the locale hack. Now a bunch of stupid questions:

1) Why do the utf-8 locales behave differently when working with what appears to be clearly latin1 characters? From my understanding, latin1 characters shouldn't be affected at all.

2) Why does the query where I just ask for equality of the value to "admin.member-request.list" work perfectly when the FROM clause contains one table, but fail as soon as the FROM starts to contain multiple tables?

3) How do I fix it? Should I take the locale sources for ru_RU.utf8 on Linux and compile them on FreeBSD - will that help?

4) The most disappointing thing is that I have a long-term relationship with pgsql replication, and to this day I was able to do all kinds of juggling (replicating from Linux to Solaris, from Solaris to FreeBSD and vice versa, all possible combinations) as long as UTF-8 was the encoding - what changed now?

5) Will downgrading to 13.4 on the standby help me?

Thanks.
Eugene.
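P.S. A crude way to see whether the two operating systems sort the same strings differently is to run the same ORDER BY under the database collation on both nodes and compare the output ordering; the sample strings below are invented purely to probe how the punctuation sorts:

-- run on both primary and standby and compare the row order
SELECT v
FROM (VALUES ('admin.member-request.list'),
             ('admin.memberrequest.list'),
             ('admin.member request.list')) AS t(v)
ORDER BY v;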
Quit currently running query
How can I quit a currently running query? I've issued a query and my server does not respond anymore. Is there another solution than using kill -9?
Re: Quit currently running query
On Tue, 28 Feb 2023 at 22:30, Albert Cornelius wrote:
>
> How can I quit a currently running query? I've issued a query and my server
> does not respond anymore. Is there another solution than using kill -9?

See here:

https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL

Regards

Ian Barwick
Re: Quit currently running query
On 2/28/23 04:53, Albert Cornelius wrote:
> How can I quit a currently running query? I've issued a query and my server
> does not respond anymore. Is there another solution than using kill -9?

What *specifically* do you mean by "my server does not respond anymore"? Because if "the server" *really* doesn't respond anymore, you can't even issue "kill -9".

I'd first try "select pg_cancel_backend(<pid>);". If that doesn't work, try "select pg_terminate_backend(<pid>);", which is the PostgreSQL-approved version of "kill -9".

--
Born in Arizona, moved to Babylonia.
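(Rough sketch of the usual sequence; 12345 is a placeholder for whatever pid pg_stat_activity shows for the stuck query.)

-- find the offending backend
SELECT pid, state, query_start, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY query_start;

-- ask it to cancel the current query
SELECT pg_cancel_backend(12345);

-- if that has no effect, terminate the whole backend
SELECT pg_terminate_backend(12345);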
Re: 13.x, stream replication and locale(?) issues
"Eugene M. Zheganin" writes: > I have a 13.4 pgsql instance on Linux which has a bunch of databases > with UTF-8/ru_RU.utf8 encoding/collation set. > I've stream replicated it to the 13.10 instance on FreeBSD (may be this > is the part where it all has gone wrong way, but at this moment I > believe streaming replication should work since both run one major > version). And the funny things started to happen. Sadly, ru_RU.utf8 on Linux and ru_RU.utf8 on FreeBSD are almost certainly *not* exactly compatible. You could probably ignore the differences if you were using logical replication, but with physical replication any difference in sort order is going to mean that indexes on text columns appear corrupt on the standby. Which is exactly what your troubles sound like. You could verify this theory by seeing whether contrib/amcheck reports any ordering problems in the indexes of the troublesome tables. (You'll probably have to promote the standby to primary in order to install the amcheck extension, but you need to treat that installation as hosed anyway ...) There's more info about this general class of problems at https://wiki.postgresql.org/wiki/Locale_data_changes although that focuses on the even-more-annoying case where locale sort order changes between releases of a single OS. regards, tom lane
Re: Quit currently running query
On 2/28/23 03:53, Albert Cornelius wrote:
> How can I quit a currently running query? I've issued a query and my server
> does not respond anymore. Is there another solution than using kill -9?

Can you open another psql session to your server?
Move all elements toward another schema?
Good afternoon,

With PostgreSQL 13, I want to find a way to move 100 tables from schema 'A' to schema 'B'. Not just the data, but also indexes, primary keys, constraints (INCLUDING ALL). As far as I know, this piece of code would move the data. But how do I also move the indexes, constraints and primary keys?

DO
$$
DECLARE
    row record;
BEGIN
    FOR row IN SELECT tablename FROM pg_tables WHERE schemaname = 'A' -- and other conditions, if needed
    LOOP
        EXECUTE format('ALTER TABLE "A".%I SET SCHEMA "B";', row.tablename);
    END LOOP;
END;
$$;

Thanks so much.
Re: Move all elements toward another schema?
On Tue, Feb 28, 2023 at 9:37 AM celati Laurent wrote:

> Good afternoon,
>
> With PostgreSQL 13, I want to find a way to move 100 tables from schema
> 'A' to schema 'B'. Not just the data, but also indexes, primary keys,
> constraints (INCLUDING ALL). As far as I know, this piece of code would
> move the data. But how do I also move the indexes, constraints and
> primary keys?
>
> DO
> $$
> DECLARE
>     row record;
> BEGIN
>     FOR row IN SELECT tablename FROM pg_tables WHERE schemaname = 'A' -- and other conditions, if needed
>     LOOP
>         EXECUTE format('ALTER TABLE "A".%I SET SCHEMA "B";', row.tablename);
>     END LOOP;
> END;
> $$;
>

Run the code in some test environment and see exactly what it does instead of guessing.

In any case, at least for constraints, it isn't like they have an existence beyond the table upon which they are defined; there is no CREATE/ALTER CONSTRAINT command, so moving a table necessarily moves anything that is inherently a part of that table. There is an ALTER INDEX command, although since it lacks a "SCHEMA" instruction I would assume the indexes, which are indeed a fundamental part of the table, would be moved along with the table. Experiment if you want a more concrete answer. But it doesn't make sense to have these things be in different schemas than the tables they modify, so it all makes sense from a design standpoint.

David J.
Re: Move all elements toward another schema?
> As far as I know, this piece of code would move the data. But how do I also
> move the indexes, constraints and primary keys?

create schema if not exists a;
create schema if not exists b;
create table a.a(id integer not null constraint pk_a primary key, name text);
create index idx_a_name on a.a(name);
alter table a.a set schema b;

select relnamespace::regnamespace, relname
from pg_class
where relname in ('a', 'pk_a', 'idx_a_name');

 relnamespace | relname
--------------+------------
 b            | a
 b            | idx_a_name
 b            | pk_a
(3 rows)
Re: Quit currently running query
On 2023-02-28 07:42:08 -0600, Ron wrote:
> On 2/28/23 04:53, Albert Cornelius wrote:
> > How can I quit a currently running query? I've issued a query and
> > my server does not respond anymore. Is there another solution than
> > using kill -9?
>
> What specifically do you mean by "my server does not respond
> anymore"? Because if "the server" really doesn't respond anymore, you
> can't even issue "kill -9".

The term "server" can mean different things:

1) A role in an interaction between two programs. The "client" is the one which issues requests, and the "server" is the one which fulfills them.

2) A program which is intended for the server role. Note that a program can be a server for one protocol and a client for another (e.g., a database server can be a DNS or LDAP client). (And some programs are even server and client for the same protocol.)

3) A machine intended for running server programs.

You are thinking of the 3rd meaning. My guess is that Albert meant the first.

        hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | h...@hjp.at        |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
Re: Quit currently running query
On 2/28/23 11:31, Peter J. Holzer wrote:
> On 2023-02-28 07:42:08 -0600, Ron wrote:
>> On 2/28/23 04:53, Albert Cornelius wrote:
>>> How can I quit a currently running query? I've issued a query and
>>> my server does not respond anymore. Is there another solution than
>>> using kill -9?
>>
>> What specifically do you mean by "my server does not respond
>> anymore"? Because if "the server" really doesn't respond anymore, you
>> can't even issue "kill -9".
>
> The term "server" can mean different things:
>
> 1) A role in an interaction between two programs. The "client" is the one
>    which issues requests, and the "server" is the one which fulfills them.
>
> 2) A program which is intended for the server role. Note that a program
>    can be a server for one protocol and a client for another (e.g., a
>    database server can be a DNS or LDAP client). (And some programs are
>    even server and client for the same protocol.)
>
> 3) A machine intended for running server programs.
>
> You are thinking of the 3rd meaning. My guess is that Albert meant the first.

And maybe he's referring to his client software... That's why I asked him for clarification.

--
Born in Arizona, moved to Babylonia.
Re: 13.x, stream replication and locale(?) issues
On Wed, Mar 1, 2023 at 12:09 AM Eugene M. Zheganin wrote:
> 3) How do I fix it? Should I take the locale sources for ru_RU.utf8 on Linux
> and compile them on FreeBSD - will that help?

Out of curiosity (I'm not saying it's a good idea!), do you know if FreeBSD's localedef can compile glibc's collation definitions? In theory they are in a format standardised by POSIX... I suspect there may be extensions and quirks...

At a wild guess, the data you showed doesn't even look like it contains non-ASCII characters (it looks like machine-readable identifiers or something, and perhaps it's the sort order of '-' that is causing you trouble), so it might also be possible to use the "ucs_basic" locale for that column, and then all computers will agree on the sort order. Of course that doesn't address the more general problem; presumably you also have Russian language text in your system.

As for ".utf8" vs ".UTF-8", which one is selected by initdb as the database default seems to be something that varies between Linux distributions, so I guess maybe the installers use different techniques for discovering and selecting default locale names. Unlike glibc, FreeBSD doesn't do any name mangling at all when mapping LC_COLLATE to a pathname to find the file, whereas glibc downcases and removes '-', so you can find both formats of name in the various places...
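(A minimal sketch of that per-column change, assuming operation_id is a plain text column on the table from the earlier queries; any collation-dependent index on it should end up rebuilt, but reindexing explicitly does no harm.)

-- compare this column in code-point order on every platform
ALTER TABLE micro_service_operation
    ALTER COLUMN operation_id TYPE text COLLATE "ucs_basic";

-- rebuild dependent indexes explicitly if in doubt
REINDEX TABLE micro_service_operation;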
Re: Interval in hours but not in days Leap second not taken into account
On Mon, Feb 27, 2023 at 8:26 PM PALAYRET Jacques wrote:
> # PostgreSQL does not take into account the additional second (leap second)
> in some calendar days ; eg. 2016, 31 dec. :
> SELECT to_timestamp('20170102 10:11:12','yyyymmdd hh24:mi:ss') -
> to_timestamp('20161230 00:00:00','yyyymmdd hh24:mi:ss') intervalle ;
>    intervalle
> -----------------
>  3 days 10:11:12

Bonjour Jacques,

Just for fun:

postgres=# SELECT utc_to_tai(to_timestamp('20170102 10:11:12','yyyymmdd hh24:mi:ss')) -
                  utc_to_tai(to_timestamp('20161230 00:00:00','yyyymmdd hh24:mi:ss')) intervalle;
   intervalle
-----------------
 3 days 10:11:13
(1 row)

PostgreSQL could, in theory, provide built-in UTC/TAI conversion functions using a leap second table that would be updated in each minor release, considering that the leap second table is included in the tzdata package that PostgreSQL vendors (ie includes a copy of), but it doesn't do anything like that or know anything about leap seconds.

Here's a quick and dirty low technology version of the above:

CREATE TABLE leap_seconds (time timestamptz primary key, off int);

-- refresh leap second table from ietf.org using not-very-secure hairy shell code
BEGIN;
CREATE TEMP TABLE import_leap_seconds (s int8, off int);
COPY import_leap_seconds FROM PROGRAM 'curl -s https://www.ietf.org/timezones/data/leap-seconds.list | grep -v ''^#'' | cut -f1,2';
TRUNCATE TABLE leap_seconds;
INSERT INTO leap_seconds (time, off)
SELECT '1900-01-01 00:00:00Z'::timestamptz + interval '1 second' * s, off
FROM import_leap_seconds;
DROP TABLE import_leap_seconds;
COMMIT;

CREATE OR REPLACE FUNCTION leap_seconds_before_utc_time(t timestamptz)
RETURNS int
STRICT
LANGUAGE SQL AS
$$
SELECT off
  FROM leap_seconds
 WHERE time <= t
 ORDER BY time DESC
 FETCH FIRST ROW ONLY
$$;

CREATE OR REPLACE FUNCTION utc_to_tai(t timestamptz)
RETURNS timestamptz
STRICT
LANGUAGE SQL AS
$$
SELECT t + interval '1 second' * coalesce(leap_seconds_before_utc_time(t), 0);
$$;

CREATE OR REPLACE FUNCTION tai_to_utc(t timestamptz)
RETURNS timestamptz
STRICT
LANGUAGE SQL AS
$$
SELECT t - interval '1 second' * coalesce(leap_seconds_before_utc_time(t), 0);
$$;