On Mon, 2023-02-27 at 07:26 +, PALAYRET Jacques wrote:
> # An interval in "years months ... seconds", given in seconds by
> EXTRACT(EPOCH ...) and cast back to INTERVAL:
> SELECT (EXTRACT(EPOCH FROM ('3 years 2 months 1 day 10 hours 11 minutes 12
> seconds'::interval) ) || ' seconds')::inter
Does PostgreSQL take into account daylight saving time in its calendar?
For the spring Daylight Saving Time change, on Sunday, March 27, 2022:
SELECT to_timestamp('20220329 00:00:00','yyyymmdd hh24:mi:ss') -
to_timestamp('20220320 00:00:00','yyyymmdd hh24:mi:ss') intervalle ;
Laurenz Albe, you are right, thank you; actually, it depends (of course) on the
time zone:
# With GMT (no Daylight Saving Time):
SHOW timezone ;
TimeZone
--
GMT
SELECT timestamp with time zone '2022-03-29 12:00:00' - timestamp with time
zone '2022-03-26 12:00:00' ;
?column?
--
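For contrast, a sketch of the same subtraction under a DST-observing zone (Europe/Paris is an assumption here; any zone that moved its clocks forward on March 27, 2022 behaves the same):

```sql
SET timezone = 'Europe/Paris';
SELECT timestamp with time zone '2022-03-29 12:00:00'
     - timestamp with time zone '2022-03-26 12:00:00';
-- 2 days 23:00:00, because the hour skipped on March 27 is not counted
```

Subtracting two timestamptz values yields the true elapsed time, so the result is one hour short of three full days.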
Good morning,
I am new to PostgreSQL. I have 50 tables in an "ign" schema (a schema other
than public).
I would like for each of these 50 tables:
- Add a prefix to the name of the table: "IGN_bdTopo_"
- Add a suffix to the table name: "_V1"
- create a new "date" column of date type. And populate th
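One possible shape for this, as an untested sketch (the DO block, the DEFAULT, and quoting the new column as "date" are all assumptions):

```sql
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'ign'
    LOOP
        -- rename: prefix "IGN_bdTopo_" and suffix "_V1"
        EXECUTE format('ALTER TABLE ign.%I RENAME TO %I',
                       t, 'IGN_bdTopo_' || t || '_V1');
        -- add and populate a "date" column
        EXECUTE format('ALTER TABLE ign.%I ADD COLUMN "date" date DEFAULT current_date',
                       'IGN_bdTopo_' || t || '_V1');
    END LOOP;
END $$;
```

format() with %I takes care of quoting the mixed-case names the prefix introduces.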
On Mon, 2023-02-27 at 06:28 +, Jan Bilek wrote:
> Our customer was able to sneak Unicode data into a column of JSON type,
> and now that record fails on select.
> Would you be able to suggest any way out of this? E.g. finding the
> offending row, updating its data ... ?
I'd be curious to
On Mon, 2023-02-27 at 03:52 +, Neethu P wrote:
> Is it possible to access the pg_ddl_command using C function? In the shared
> links,
> I couldn't find one. Can you please share an example for the same?
No. You could hire a professional.
Yours,
Laurenz Albe
On 2/27/23 05:53, celati Laurent wrote:
Good morning,
I am new to PostgreSQL. I have 50 tables in an "ign" schema (a schema other
than public).
I would like for each of these 50 tables:
- Add a prefix to the name of the table: "IGN_bdTopo_"
- Add a suffix to the table name: "_V1"
- create a ne
> On 27/02/2023 13:13 CET Laurenz Albe wrote:
>
> I'd be curious to know how the customer managed to do that.
> Perhaps there is a loophole in PostgreSQL that needs to be fixed.
Probably via some data access layer and not directly via Postgres. It's easy
to reproduce with psycopg:
impor
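A DB-free sketch of the first half of such a reproduction (the psycopg call in the comment uses invented identifiers and needs a live server): Python strings may contain lone UTF-16 surrogates, and json.dumps serializes them as \uXXXX escapes, which PostgreSQL's json type only validates syntactically on input.

```python
import json

# A lone high surrogate: allowed in a Python str, but not a valid
# Unicode scalar value.
payload = json.dumps({"name": "\ud83d"})
print(payload)  # {"name": "\ud83d"}

# With psycopg the text could then be stored in a json column unchanged:
#   cur.execute("INSERT INTO docs (doc) VALUES (%s::json)", [payload])
# The insert succeeds, but later processing that must resolve the escape
# can fail on select.
```

This is exactly the asymmetry the thread discusses: json keeps the text as-is, while jsonb would resolve the escape at input time and reject it.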
Hello, I tried upgrading pg db from version 9.6 to 14 by using
pg_upgradecluster command. I freshly installed pg 14 -> ran pg_dropcluster
14 main --stop -> and then upgraded using pg_upgradecluster 9.6 main.
After a successful prompt finished, I checked the database and the size
went from originall
Erik Wienhold writes:
>> On 27/02/2023 13:13 CET Laurenz Albe wrote:
>> I'd be curious to know how the customer managed to do that.
>> Perhaps there is a loophole in PostgreSQL that needs to be fixed.
> Another reason to prefer jsonb over json: jsonb rejects such inputs right away.
> The documentatio
On 2/27/23 07:44, Dávid Suchan wrote:
Hello, I tried upgrading pg db from version 9.6 to 14 by using
pg_upgradecluster command. I freshly installed pg 14 -> ran
pg_dropcluster 14 main --stop -> and then upgraded using
pg_upgradecluster 9.6 main.
After a successful prompt finished, I checked the
Adrian Klaver writes:
> On 2/27/23 07:44, Dávid Suchan wrote:
>> After a successful prompt finished, I checked the database and the size
>> went from originally 20gb (in 9.6) to 700~ mb (in 14) while the disk
>> space available shrank by about 2gb meaning that there is still the 20gb
>> of data
On 2/27/23 08:36, Tom Lane wrote:
Adrian Klaver writes:
On 2/27/23 07:44, Dávid Suchan wrote:
After a successful prompt finished, I checked the database and the size
went from originally 20gb (in 9.6) to 700~ mb (in 14) while the disk
space available shrank by about 2gb meaning that there is s
Adrian Klaver writes:
> On 2/27/23 08:36, Tom Lane wrote:
>> If it was based on something like "du", perhaps the measurement
>> was fooled by the fact that most of the data files will be hard-linked
>> between the old and new clusters.
> Does that happen without the --link option?
No, but the OP
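A minimal, self-contained sketch of that measurement trap (paths and the file name are invented): du counts each hard-linked inode only once per invocation, so a hard-linked new cluster can appear nearly empty when measured together with the old one.

```shell
set -e
d=$(mktemp -d)
mkdir "$d/old" "$d/new"
# a fake 2 MB relation file in the "old cluster"
dd if=/dev/zero of="$d/old/16384" bs=1024 count=2048 2>/dev/null
# hard link into the "new cluster", as pg_upgrade --link would create
ln "$d/old/16384" "$d/new/16384"
du -sk "$d/old" "$d/new"   # "new" reports almost nothing: its blocks
                           # were already counted under "old"
rm -rf "$d"
```

Measuring each directory in a separate du invocation would show the full size twice instead.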
On 2/27/23 08:48, Tom Lane wrote:
Adrian Klaver writes:
On 2/27/23 08:36, Tom Lane wrote:
If it was based on something like "du", perhaps the measurement
was fooled by the fact that most of the data files will be hard-linked
between the old and new clusters.
Does that happen without the --l
On 2/27/23 08:49, Dávid Suchan wrote:
Reply to list
Ccing list for real this time.
1) I used \l+ in psql and then counted rows- millions were missing
\l lists databases.
Are you saying there are millions of databases?
Otherwise, what rows were you counting?
3) nothing at all, everything wa
On 2/27/23 09:05, Dávid Suchan wrote:
Please use Reply All
Ccing list
My bad,
\l+ lists databases and their respective sizes- I used that and also
pg_size_pretty(), the result size was the same - before it was 20gb for
the biggest db, after it was 700mb.
I counted rows before the upgrade in o
I did not use the -k (--link) argument while upgrading, as I presume that
does not copy the data
Dňa po 27. 2. 2023, 18:10 Adrian Klaver
napísal(a):
> On 2/27/23 09:05, Dávid Suchan wrote:
>
> Please use Reply All
> Ccing list
>
> > My bad,
> > \l+ lists databases and their respective sizes- I used
On Mon, 2023-02-27 at 18:18 +0100, Dávid Suchan wrote:
> I did not use the -k (--link) argument while upgrading, as I presume that
> does not copy the data
It would be great if you shared the exact command line you used.
The man page of "pg_upgradecluster" says:
-m, --method=dump|upgrade|link|cl
On 2/27/23 09:10, Adrian Klaver wrote:
On 2/27/23 09:05, Dávid Suchan wrote:
Please use Reply All
Ccing list
My bad,
\l+ lists databases and their respective sizes- I used that and also
pg_size_pretty(), the result size was the same - before it was 20gb
for the biggest db, after it was 700mb
I used "pg_upgradecluster 9.6 main", all commands are in my first message:
I freshly installed pg 14 -> ran pg_dropcluster 14 main --stop -> and then
upgraded using pg_upgradecluster 9.6 main.
po 27. 2. 2023 o 18:40 Laurenz Albe napísal(a):
> On Mon, 2023-02-27 at 18:18 +0100, Dávid Suchan wrote
1) I downloaded both versions using apt-get install postgres
2) I will check the tablespace and log files tomorrow; I don't have access
to the workstation right now.
po 27. 2. 2023 o 18:44 Adrian Klaver napísal(a):
> On 2/27/23 09:10, Adrian Klaver wrote:
> > On 2/27/23 09:05, Dávid Suchan wrote
Hi Team,
This is the function I created successfully, but while executing it, it
throws an error that the temp table doesn't exist.
But when I execute the same statements outside the function, from the
**drop temp table** through the final **insert ... select**, it works fine.
Please can anyone help me understand why, in the function, I am not
Is there any direct way in Postgresql to get rid of the frankly anti-useful
junk at the end of each line (which also infects pg_stat_activity.query), or
must I resort to sed post-processing?
test=# select pg_get_functiondef(oid)
test-# from pg_proc
test-# where proname = 'foo';
On 2/27/23 11:34 AM, Ron wrote:
Is there any direct way in Postgresql to get rid of the frankly
anti-useful junk at the end of each line (which also infects
pg_stat_activity.query), or must I resort to sed post-processing?
\pset format unaligned
test=# select pg_get_functiondef(oid)
test-#
On 2/27/23 11:34 AM, Ron wrote:
Is there any direct way in Postgresql to get rid of the frankly
anti-useful junk at the end of each line (which also infects
pg_stat_activity.query), or must I resort to sed post-processing?
Or
\ef foo
test=# select pg_get_functiondef(oid)
test-# from pg_p
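The line-end clutter is the "+" continuation marker that psql's aligned format appends to multi-line values; a sketch of the unaligned route ('foo' stands in for the real function name):

```
\pset format unaligned
\t on
SELECT pg_get_functiondef('foo'::regproc);
```

\t on (tuples-only) also drops the header and row-count footer, which is usually what you want when dumping a function body to a file with \o.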
On 2/27/23 11:10 AM, nikhil raj wrote:
Hi Team,
This is the function I created successfully, but while executing it, it
throws an error that the temp table doesn't exist.
But when I execute the same statements outside the function, from the
**drop temp table** through the final **insert ... select**, it works fine.
Please can
On 2/27/23 11:52 AM, nikhil raj wrote:
Reply to list also
Ccing list.
HI Adrian,
Yes, I have tried it, and it throws the same error.
Orion_db=> select api."post_publish_Roster"()
Orion_db -> ;
ERROR: relation "roster_table" does not exist
LINE 94: ... interval '1 day')::date as d_date F
nikhil raj writes:
> This is the function I created successfully, but while executing it, it
> throws an error that the temp table doesn't exist.
You won't be able to do it like that in a SQL-language function, because
the whole function body is parsed and parse-analyzed in one go. So the
later query r
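By contrast, a PL/pgSQL body is planned statement by statement at execution time, so a temp table created earlier in the body is visible to later statements. A hedged sketch (names invented, loosely echoing the thread's roster_table):

```sql
CREATE OR REPLACE FUNCTION demo_roster_count() RETURNS bigint
LANGUAGE plpgsql AS $$
BEGIN
    DROP TABLE IF EXISTS roster_table;
    CREATE TEMP TABLE roster_table AS
        SELECT g AS n FROM generate_series(1, 3) AS g;
    -- this query is planned only when reached, after the temp table exists
    RETURN (SELECT count(*) FROM roster_table);
END
$$;
```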
Hi Tom,
The same query, when executed outside the function, works properly; I mean
just the statements from the drop temp table through the insert. But when I
put the query inside a function, it throws the error. And after the temp
table creation I am only using a select statement, yet it still does not
execute the creat
On Mon, Feb 27, 2023 at 1:17 PM nikhil raj wrote:
>
> The same query, when executed outside the function, works properly; I mean
> just the statements from the drop temp table through the insert. But when I
> put the query inside a function, it throws the error. And after the temp
> table creation I am only using
On Mon, Feb 27, 2023 at 2:40 PM Adrian Klaver
wrote:
> On 2/27/23 11:34 AM, Ron wrote:
> > Is there any direct way in Postgresql to get rid of the frankly
> > anti-useful junk at the end of each line (which also infects
> > pg_stat_activity.query), or must I resort to sed post-processing?
> >
> >
Hi,
I’ve got a… rather large query (see below), in which I join a complex
data structure (whose exact contents do not, at this point, matter)
together to get some auxiliary data to expose as JSON field.
In this query I can use, for example…
jsonb_build_object('user_permissions',
jsonb_agg(DI
mirabilos writes:
> This works well. However, what I seem to be not allowed to do is
> (without the extra COALESCE, to simplify):
> ...
> This is because, when I use DISTINCT (but only then‽), the ORDER BY
> arguments must be… arguments to the function, or something.
Well, yeah. Simplify it to
On Mon, Feb 27, 2023 at 4:11 PM mirabilos wrote:
>
> jsonb_build_object('opening_times',
> jsonb_agg(DISTINCT jsonb_build_object(
> 'weekday', cot.weekday,
> 'from_hour', cot.from_hour,
> 'to_hour', cot.to_hour)
> ORDER BY cot.weekday, cot.from_hour, cot.to_hour)
>
On Mon, Feb 27, 2023 at 5:22 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
> Lastly, if you do need to care about normalizing the output of JSON you
> should consider writing a function that takes arbitrary json input and
> reformats it, rather than trying to build up json from scrat
On 2/27/23 22:13, Laurenz Albe wrote:
On Mon, 2023-02-27 at 06:28 +, Jan Bilek wrote:
Our customer was able to sneak Unicode data into a column of JSON type, and
now that record fails on select.
Would you be able to suggest any way out of this? E.g. finding the
offending row, updating
On 2/28/23 01:17, Erik Wienhold wrote:
>> On 27/02/2023 13:13 CET Laurenz Albe wrote:
>>
>> I'd be curious to know how the customer managed to do that.
>> Perhaps there is a loophole in PostgreSQL that needs to be fixed.
> Probably via some data access layer and not directly via Postgres. It's ea
On Mon, 27 Feb 2023, Tom Lane wrote:
>Well, yeah. Simplify it to
>
> SELECT array_agg(DISTINCT x ORDER BY y) FROM mytable;
That’s… a bit too simple for this case.
>If there are several rows containing the same value of x and different
>values of y, which y value are we supposed to sort the uni
Thorsten Glaser writes:
> On Mon, 27 Feb 2023, Tom Lane wrote:
>> Well, yeah. Simplify it to
>> SELECT array_agg(DISTINCT x ORDER BY y) FROM mytable;
> That’s… a bit too simple for this case.
Sure, I was just trying to explain the rule.
>> For the specific example you give, it's true that any
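The rule in question, sketched against a hypothetical table t(x, y):

```sql
SELECT array_agg(DISTINCT x ORDER BY x) FROM t;  -- OK: the sort key is an argument
SELECT array_agg(DISTINCT x ORDER BY y) FROM t;
-- ERROR:  in an aggregate with DISTINCT, ORDER BY expressions must appear in argument list
```

With DISTINCT, each distinct x may correspond to several y values, so sorting by y would be ambiguous.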
On Mon, Feb 27, 2023 at 6:22 PM Thorsten Glaser wrote:
>
> Not about the DISTINCTs. I haven’t used JOIN much (usually WHERE
> as inner join) nor on tables this massive, and this is my second
> foray into aggregate functions only.
>
>
Fair.
Consider this then as a jumping point to a more precise
On Mon, 27 Feb 2023, David G. Johnston wrote:
>Consider this then as a jumping point to a more precise query form:
[…]
>the basic concept holds - produce single rows in subqueries then join those
>various single rows together to produce your desired json output.
Ouch. I’ll have to read up and exp
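A sketch of the suggested shape (table names are guesses based on the thread's cot alias): aggregate each detail set in its own subquery so each yields exactly one row, then join those single rows; no DISTINCT is needed because nothing gets multiplied by the join.

```sql
SELECT jsonb_build_object(
           'opening_times',    o.times,
           'user_permissions', p.perms)
FROM (SELECT jsonb_agg(jsonb_build_object(
                 'weekday',   weekday,
                 'from_hour', from_hour,
                 'to_hour',   to_hour)
             ORDER BY weekday, from_hour, to_hour) AS times
      FROM club_opening_times) AS o
CROSS JOIN
     (SELECT jsonb_agg(perm) AS perms
      FROM user_permissions) AS p;
```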