Content of pg_publication using a local connection versus network connection?

2025-01-27 Thread Shaheed Haque
Hi,

I'm a novice-ish when it comes to Postgres, but I've studied the docs and
not been able to understand why I can see the rows in pg_publication via a
local psql session, but not when I am connected via the network.

Since the network login is (a) successful and (b) can read the content of
other non-system tables, I guessed that my problem is row-level security
(RLS), except that from the docs I was unable to see how the login type
could affect RLS. What am I missing?

Here is some context...please do ask if something else needs to be
clarified!

- System Postgres 16, AWS RDS version.
- The pg_publication table looks like this:

foo=> \dpS pg_publication
                                       Access privileges
   Schema   |      Name      | Type  |     Access privileges     | Column privileges | Policies
------------+----------------+-------+---------------------------+-------------------+----------
 pg_catalog | pg_publication | table | rdsadmin=arwdDxt/rdsadmin+|                   |
            |                |       | =r/rdsadmin               |                   |

- When I am logged in as this user via psql, I can see:

foo=> select * from pg_publication;
  oid  |      pubname      | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate | pubviaroot
-------+-------------------+----------+--------------+-----------+-----------+-----------+-------------+------------
 98923 | vm_db_publication |    16478 | t            | t         | t         | t         | t           | f


- When I connect via psycopg, I can read other tables, but pg_publication
always seems to return no rows.

Any assistance would be appreciated.

Thanks, Shaheed


Re: Content of pg_publication using a local connection versus network connection?

2025-01-27 Thread Adrian Klaver

On 1/27/25 12:41, Shaheed Haque wrote:

Hi,

I'm a novice-ish when it comes to Postgres, but I've studied the docs 
and not been able to understand why I can see the rows in pg_publication 
via a local psql session, but not when I am connected via the network.


Since the network login is (a) successful and (b) can read the content 
of other non-system tables, I guessed that my problem is row-level 
security (RLS), except that from the docs I was unable to see how the 
login type could affect RLS. What am I missing?


Here is some context...please do ask if something else needs to be 
clarified!


- System Postgres 16, AWS RDS version.
- The pg_publication table looks like this:

foo=> \dpS pg_publication
                                       Access privileges
   Schema   |      Name      | Type  |     Access privileges     | Column privileges | Policies
------------+----------------+-------+---------------------------+-------------------+----------
 pg_catalog | pg_publication | table | rdsadmin=arwdDxt/rdsadmin+|                   |
            |                |       | =r/rdsadmin               |                   |



- When I am logged in as this user via psql, I can see:


This user is rdsadmin or something else?



foo=> select * from pg_publication;
  oid  |      pubname      | pubowner | puballtables | pubinsert | pubupdate | pubdelete | pubtruncate | pubviaroot
-------+-------------------+----------+--------------+-----------+-----------+-----------+-------------+------------
 98923 | vm_db_publication |    16478 | t            | t         | t         | t         | t           | f



- When I connect via psycopg, I can read other tables, but pg_publication 
always seems to return no rows.


1) What is your connection string?
   In particular what user are you connecting as?

2) Are you sure you are connecting to same database?
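
A quick way to answer both questions from each session, a sketch using only
built-in functions (no assumptions about the schema):

```sql
-- Run this from the psql session and from the psycopg session, then compare.
SELECT current_database(),   -- which database am I in?
       current_user,         -- which role am I connected as?
       inet_server_addr(),   -- server address (NULL over a Unix socket)
       inet_server_port();   -- server port
```

If any column differs between the two sessions, the connections are not
going to the same place.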




Any assistance would be appreciated.

Thanks, Shaheed





--
Adrian Klaver
adrian.kla...@aklaver.com





Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Nem Tudom




Hi all,


I'm having trouble understanding matters related to TIMESTAMP(TZ)-s and 
leap seconds - my machine runs on UTC so as to remove any issues related 
to the zones.


From here: https://en.wikipedia.org/wiki/Leap_second,

There have been 27 leap seconds added to UTC since 1972.


But, when I run this fiddle (see bottom of this email link)

https://dbfiddle.uk/wxvmzfJb

(first snippet - 2015 -> 2016) I get a "nice" even number for the EPOCH 
of 2016-01-01 00:00:00 (= 1451606400) - now, with 27 leap seconds since 
1972, I would expect that number to be (something like) 1451606427?


I thought that the EPOCH was the number of seconds since 1970-01-01 
00:00:00? Is this incorrect?


Also (first snippet again), why is the timestamptz '2015-12-31 23:59:60' 
even allowed?


Now, we come to the second snippet (2016 -> 2017), I get *_exactly_* the 
same behaviour!


I was expecting to see that '2016-12-31 23:59:60'::TIMESTAMPTZ would 
work (leap second) and then that '2017-01-01 00:00:00'::TIMESTAMPTZ 
would have incremented by 1 second?


I'm puzzled. Does PostgreSQL take leap seconds into account? Does anyone?

Any help, advice, recommendations, URL-s, references &c. appreciated.


E...







Re: Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Adrian Klaver

On 1/27/25 13:01, Nem Tudom wrote:



Hi all,


I'm having trouble understanding matters related to TIMESTAMP(TZ)-s and 
leap seconds - my machine runs on UTC so as to remove any issues related 
to the zones.


 From here: https://en.wikipedia.org/wiki/Leap_second,

There have been 27 leap seconds added to UTC since 1972.


But, when I run this fiddle (see bottom of this email link)

https://dbfiddle.uk/wxvmzfJb

(first snippet - 2015 -> 2016) I get a "nice" even number for the EPOCH 
of, 00:00:00 2016 , say (= 1451606400) - now, with 27 leap seconds since 
1972, I would expect that number to be (something like) 1451606427?


I thought that the EPOCH was the number of seconds since 1970-01-01 
00:00:00? Is this incorrect?


Also, (first snippet again), why is the TIMESTAMPTZ 23:59:60 2015 even 
allowed?


Now, we come to the second snippet (2016 -> 2017), I get *_exactly_* the 
same behaviour!


I was expecting to see that '2016-12-31 23:59:60'::TIMESTAMPTZ would 
work (leap second) and then that '2017-01-01 00:00:00'::TIMESTAMPTZ 
would have incremented by 1 second?


I'm puzzled. Does PostgreSQL take leap seconds into account? Does anyone?

Any help, advice, recommendations, URL-s, references &c. appreciated.


https://www.postgresql.org/docs/current/functions-datetime.html

"timezone

The time zone offset from UTC, measured in seconds. Positive values 
correspond to time zones east of UTC, negative values to zones west of 
UTC. (Technically, PostgreSQL does not use UTC because leap seconds are 
not handled.)

"

https://www.postgresql.org/docs/current/view-pg-timezone-names.html

" (Technically, PostgreSQL does not use UTC because leap seconds are not 
handled.)"
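
Both behaviours can be demonstrated directly; a minimal sketch, assuming a
UTC session (the values shown are what I'd expect, worth re-running locally):

```sql
-- 86400 seconds per day, no leap-second drift:
SELECT extract(epoch FROM timestamptz '2016-01-01 00:00:00+00');
-- 1451606400, not 1451606427

-- A seconds field of 60 is accepted and simply rolls over:
SELECT timestamptz '2015-12-31 23:59:60+00';
-- 2016-01-01 00:00:00+00
```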



E...







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Peter J. Holzer
On 2025-01-27 21:01:59 +, Nem Tudom wrote:
> I'm having trouble understanding matters related to TIMESTAMP(TZ)-s and leap
> seconds - my machine runs on UTC so as to remove any issues related to the
> zones.
> 
> From here: https://en.wikipedia.org/wiki/Leap_second,
> 
> There have been 27 leap seconds added to UTC since 1972.
> 
> 
> But, when I run this fiddle (see bottom of this email link)
> 
> https://dbfiddle.uk/wxvmzfJb
> 
> (first snippet - 2015 -> 2016) I get a "nice" even number for the EPOCH of,
> 00:00:00 2016 , say (= 1451606400) - now, with 27 leap seconds since 1972, I
> would expect that number to be (something like) 1451606427?
> 
> I thought that the EPOCH was the number of seconds since 1970-01-01
> 00:00:00? Is this incorrect?

The POSIX standard mandates that leap seconds must be ignored. It's not
really "number of seconds since 1970-01-01", but "number of days since
1970-01-01 times 86400 plus number of seconds in the current day".

So you can't use epoch to detect leap seconds.
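
The day-count formula can be checked directly in SQL (date subtraction in
PostgreSQL yields a whole number of days):

```sql
-- 16801 days from 1970-01-01 to 2016-01-01, times 86400 seconds per day:
SELECT date '2016-01-01' - date '1970-01-01'                  AS days,
       (date '2016-01-01' - date '1970-01-01') * 86400::bigint AS epoch;
-- days = 16801, epoch = 1451606400, matching the fiddle
```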

And I don't think PostgreSQL keeps track of leap seconds internally
either, so that information simply isn't there to begin with.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Content of pg_publication using a local connection versus network connection?

2025-01-27 Thread Shaheed Haque
Hi Adrian,

On Mon, 27 Jan 2025 at 20:51, Adrian Klaver 
wrote:

> On 1/27/25 12:41, Shaheed Haque wrote:
> > Hi,
> >
> > I'm a novice-ish when it comes to Postgres, but I've studied the docs
> > and not been able to understand why I can see the rows in pg_publication
> > via a local psql session, but not when I am connected via the network.
> >
> > Since the network login is (a) successful and (b) can read the content
> > of other non-system tables, I guessed that my problem is row-level
> > security (RLS)except that from the docs, I was unable to see how the
> > login type could affect RLS. What am I missing?
> >
> > Here is some context...please do ask if something else needs to be
> > clarified!
> >
> > - System Postgres 16, AWS RDS version.
> > - The pg_publication table looks like this:
> >
> > foo=>  \dpS pg_publication
> >Access privileges
> >Schema   |  Name  | Type  | Access privileges |
> > Column privileges | Policies
> >
> ++---+---+---+--
> > pg_catalog | pg_publication | table | rdsadmin=arwdDxt/rdsadmin+|
> >|
> > ||   | =r/rdsadmin   |
> >|
> >
> >
> > - When I am logged in as this user via psql, I  can see:
>
> This user is rdsadmin or something else?
>

The username is "dbcorexyz". See more below.


> >
> > foo=> select * from pg_publication;
> >   oid  |  pubname  | pubowner | puballtables | pubinsert |
> > pubupdate | pubdelete | pubtruncate | pubviaroot
> >
> ---+---+--+--+---+---+---+-+
> > 98923 | vm_db_publication |16478 | t| t | t
> >  | t | t   | f
> >
> >
> > - When I connect via psycopg, I can read other tables, but pg_publication
> > always seems to return no rows.
>
> 1) What is your connection string?
> In particular what user are you connecting as?
>

When I use psql, I first have to SSH to an AWS EC2, and then run psql.
Thus, the details in this case are:

   - ssh -i vm_paiyroll.pem awsuser@18.168.196.169
   - foo=> \conninfo

You are connected to database "foo" as user "dbcorexyz" on host "
live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com"
(address "172.31.4.93") at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
compression: off)

When I connect via psycopg, I first set up an SSH tunnel through the EC2
host, and then connect. Thus the details in this case are:

   -

   bastion_host.ssh_host is '18.168.196.169', bastion_host.ssh_username
is awsuser

   - 

I *am* dealing with multiple db connections (am working on some replication
tooling) but AFAICS, both connections are to the same place.

Thanks, Shaheed


>
> 2) Are you sure you are connecting to same database?
>
>
> >
> > Any assistance would be appreciated.
> >
> > Thanks, Shaheed
> >
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Parallel workers via functions?

2025-01-27 Thread Jim Vanns
Apologies for the rushed question ...

If I have a function that is marked 'stable parallel safe' and returns
a table, can a calling function or procedure (marked 'volatile parallel
unsafe') still take advantage of the parallel workers from the first
function as the data source? I.e.

func_a(); // selects, returns table, parallel safe
func_b() {
   insert into foo
   select * from func_a(); // Will func_a still execute parallel
workers to fetch the data?
}

Or even if func_b() uses 'create temporary table as select * from
func_a()' and then insert?

I ask because when I simply call func_a() from a psql shell, I see the
parallel workers run and everything is nice and swift. But when called
from a data-modifying function like func_b(), no workers are spawned
:( Even from the read-part of the code.

Are there differences in functions vs. stored procedures that might
affect the behaviour of the planner to disregard workers?

Cheers

Jim

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London




Re: Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Adrian Klaver

On 1/27/25 13:23, Nem Tudom wrote:

Reply to list also.
Ccing list.

See post from Peter Holzer.




Hi Adrian, all,



Any help, advice, recommendations, URL-s, references &c. appreciated.



https://www.postgresql.org/docs/current/functions-datetime.html
https://www.postgresql.org/docs/current/view-pg-timezone-names.html


" (Technically, PostgreSQL does not use UTC because leap seconds are 
not handled.)"


Well, that was sweet and to the point! :-) Thanks for your input!

Does this cause any issues interfacing with other systems?

Hmmm... it appears not:

https://dba.stackexchange.com/questions/105514/leap-second-in-database-system-postgresql-and-sql-server

and

https://stackoverflow.com/questions/31136211/how-to-handle-leap-seconds-in-oracle

So it would appear that neither Oracle nor Microsoft (RDBMS or OS/SQL 
Server) worries too much about this, and that I shouldn't either.


However, I think this means that there's 27 seconds of my life that I 
never knew I had...


Thanks again, saves me some work, and rgs,


E...






--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Content of pg_publication using a local connection versus network connection?

2025-01-27 Thread Adrian Klaver

On 1/27/25 13:34, Shaheed Haque wrote:

Hi Adrian,

On Mon, 27 Jan 2025 at 20:51, Adrian Klaver wrote:


On 1/27/25 12:41, Shaheed Haque wrote:
 > Hi,
 >
 > I'm a novice-ish when it comes to Postgres, but I've studied the
docs
 > and not been able to understand why I can see the rows in
pg_publication
 > via a local psql session, but not when I am connected via the
network.
 >
 > Since the network login is (a) successful and (b) can read the
content
 > of other non-system tables, I guessed that my problem is row-level
 > security (RLS)except that from the docs, I was unable to see
how the
 > login type could affect RLS. What am I missing?
 >
 > Here is some context...please do ask if something else needs to be
 > clarified!
 >
 > - System Postgres 16, AWS RDS version.
 > - The pg_publication table looks like this:
 >
 > foo=>  \dpS pg_publication
 >    Access privileges
 >    Schema   |  Name  | Type  | Access privileges |
 > Column privileges | Policies
 >

++---+---+---+--
 > pg_catalog | pg_publication | table | rdsadmin=arwdDxt/rdsadmin+|
 >    |
 >     |    |   | =r/rdsadmin   |
 >    |
 >
 >
 > - When I am logged in as this user via psql, I  can see:

This user is rdsadmin or something else?


The username is "dbcorexyz". See more below.

 >
 > foo=> select * from pg_publication;
 >   oid  |  pubname  | pubowner | puballtables | pubinsert |
 > pubupdate | pubdelete | pubtruncate | pubviaroot
 >

---+---+--+--+---+---+---+-+
 > 98923 | vm_db_publication |    16478 | t    | t | t
 >  | t | t   | f
 >
 >
 > - When I connect via psycopg, I can read other tables, but
 > pg_publication
 > always seems to return no rows.

1) What is your connection string?
     In particular what user are you connecting as?


When I use psql, I first have to SSH to an AWS EC2, and then run psql. 
Thus, the details in this case are:


  * ssh -i vm_paiyroll.pem awsuser@18.168.196.169

  * foo=> \conninfo 

You are connected to database "foo" as user "dbcorexyz" on host 
"live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com 
" (address "172.31.4.93") at port "5432".
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, 
compression: off)


When I connect via psycopg, I first set up an SSH tunnel through the EC2 
host, and then connect. Thus the details in this case are:


  *

bastion_host.ssh_host is '18.168.196.169', bastion_host.ssh_username 
is awsuser

  * 

I *am* dealing with multiple db connections (am working on some 
replication tooling) but AFAICS, both connections are to the same place.




Are you sure?

From psql connection:

You are connected to database "foo" as user "dbcorexyz" on host 
"live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com" 
(address "172.31.4.93")


Note host of 172.31.4.93

In psycopg2 case you again connect to 18.168.196.169 for SSH but then:

(host=localhost ...)

I'm not seeing localhost being equal to 172.31.4.93.



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Content of pg_publication using a local connection versus network connection?

2025-01-27 Thread Shaheed Haque
On Mon, 27 Jan 2025 at 21:54, Adrian Klaver 
wrote:

> On 1/27/25 13:34, Shaheed Haque wrote:
> > Hi Adrian,
> >
> > On Mon, 27 Jan 2025 at 20:51, Adrian Klaver  > > wrote:
> >
> > On 1/27/25 12:41, Shaheed Haque wrote:
> >  > Hi,
> >  >
> >  > I'm a novice-ish when it comes to Postgres, but I've studied the
> > docs
> >  > and not been able to understand why I can see the rows in
> > pg_publication
> >  > via a local psql session, but not when I am connected via the
> > network.
> >  >
> >  > Since the network login is (a) successful and (b) can read the
> > content
> >  > of other non-system tables, I guessed that my problem is row-level
> >  > security (RLS)except that from the docs, I was unable to see
> > how the
> >  > login type could affect RLS. What am I missing?
> >  >
> >  > Here is some context...please do ask if something else needs to be
> >  > clarified!
> >  >
> >  > - System Postgres 16, AWS RDS version.
> >  > - The pg_publication table looks like this:
> >  >
> >  > foo=>  \dpS pg_publication
> >  >Access privileges
> >  >Schema   |  Name  | Type  | Access privileges |
> >  > Column privileges | Policies
> >  >
> >
>  
> ++---+---+---+--
> >  > pg_catalog | pg_publication | table | rdsadmin=arwdDxt/rdsadmin+|
> >  >|
> >  > ||   | =r/rdsadmin   |
> >  >|
> >  >
> >  >
> >  > - When I am logged in as this user via psql, I  can see:
> >
> > This user is rdsadmin or something else?
> >
> >
> > The username is "dbcorexyz". See more  below.
> >
> >  >
> >  > foo=> select * from pg_publication;
> >  >   oid  |  pubname  | pubowner | puballtables | pubinsert |
> >  > pubupdate | pubdelete | pubtruncate | pubviaroot
> >  >
> >
>  
> ---+---+--+--+---+---+---+-+
> >  > 98923 | vm_db_publication |16478 | t| t |
> t
> >  >  | t | t   | f
> >  >
> >  >
> >  > - When I connect via psycopg, I can read other tables, but
> > pg_publication
> >  > always seems to return no rows.
> >
> > 1) What is your connection string?
> >  In particular what user are you connecting as?
> >
> >
> > When I use psql, I first have to SSH to an AWS EC2, and then run psql.
> > Thus, the details in this case are:
> >
> >   * ssh -i vm_paiyroll.pem awsuser@18.168.196.169
> > 
> >   * foo=> \conninfo
> >
> > You are connected to database "foo" as user "dbcorexyz" on host
> > "live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com
> > <
> http://live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com>"
> (address "172.31.4.93") at port "5432".
> > SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
> > compression: off)
> >
> > When I connect via psycopg, I first set up an SSH tunnel through the EC2
> > host, and then connect. Thus the details in this case are:
> >
> >   *
> >
> > bastion_host.ssh_host is '18.168.196.169', bastion_host.ssh_username
> > is awsuser
> >
> >   *  > database=foo) at 0x7f6bfd554a90>
> >
> > I *am* dealing with multiple db connections (am working on some
> > replication tooling) but AFAICS, both connections are to the same place.
> >
>
> Are you sure?
>
>  From psql connection:
>
> You are connected to database "foo" as user "dbcorexyz" on host
> "live-paiyroll-db-c702180bbf.ci22uuz4wz33.eu-west-2.rds.amazonaws.com"
> (address "172.31.4.93")
>
> Note host of 172.31.4.93
>
> In psycopg2 case you again connect to 18.168.196.169 for SSH but then:
>
> (host=localhost ...)
>
> I'm not seeing localhost being equal to 172.31.4.93.
>

Erk. I think you may have got it. I will go examine my navel...and the
code. Many thanks for the quick and kind help.

Shaheed


>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Tom Lane
"Peter J. Holzer"  writes:
> On 2025-01-27 21:01:59 +, Nem Tudom wrote:
>> I thought that the EPOCH was the number of seconds since 1970-01-01
>> 00:00:00? Is this incorrect?

> The POSIX standard mandates that leap seconds must be ignored. It's not
> really "number of seconds since 1970-01-01", but "number of days since
> 1970-01-01 times 86400 plus number of seconds in the current day".

I'm not sure what POSIX says about this, but that is the definition
Postgres uses --- and we won't let you select a timezone setting
that does account for leap seconds.

postgres=# set timezone = 'America/New_York';
SET
postgres=# set timezone = 'right/America/New_York';
ERROR:  time zone "right/America/New_York" appears to use leap seconds
DETAIL:  PostgreSQL does not support leap seconds.

If we did support that, it'd enormously complicate all timestamp
arithmetic --- and we could hardly do calculations with times in
the future at all, given the uncertainty around when leap seconds
will be declared.  So if you want to do astronomical timekeeping,
you should use some other data type than timestamptz.

regards, tom lane




Re: Issues with EPOCH-s, TIMESTAMP(TZ)-s and leap seconds.

2025-01-27 Thread Thomas Munro
On Tue, Jan 28, 2025 at 10:02 AM Nem Tudom  wrote:
> Any help, advice, recommendations, URL-s, references &c. appreciated.

As others have said, we're using the POSIX AKA Unix time scale, as
almost all general purpose computer systems do.  It's based on the UTC
time scale (the one that has SI seconds of fixed duration defined by
caesium atoms, with extra seconds inserted by committee that should be
displayed as eg 23:59:60 as required to stay within a certain
tolerance of the variable-duration seconds implied by the earth's
actual rotation divided by 86400, known as UT1 or something like
that), except that in POSIX the leap seconds are ignored.  There is a
sort of discontinuous jump, or you might say that the second is
compressed to a duration of 0.  Systems that have good reasons to care
about this stuff often use the TAI time scale (also SI seconds, but
with no leap seconds and thus slowly falling out of sync with the
earth's rotations), or the GPS time scale which is the same except
offset by the number of leap seconds that had been decreed as of 1980
when they invented it and ignoring all new leap seconds after that.
You need an up-to-date table of leap seconds to convert between time
scales, and of course it'd be lossy on eg TAI->POSIX conversions, but
not the reverse.

I showed the bones of how you could do this in SQL here:

https://www.postgresql.org/message-id/CA%2BhUKGLU9Don4YHnfdzn0eeWQsUu8GJDaLiUAefLLT6%3DmmeGoQ%40mail.gmail.com
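
In the same spirit, a minimal sketch of a POSIX-to-TAI conversion (not the
linked post's exact code; the table is hypothetical and deliberately
truncated, but the two offsets shown are from the published leap-second list):

```sql
-- A (partial) leap-second table: the TAI-UTC offset in force from each instant.
CREATE TEMP TABLE leap_seconds (
    applies_from  timestamptz PRIMARY KEY,  -- instant the new offset takes effect
    tai_minus_utc integer NOT NULL          -- TAI - UTC, in seconds
);
INSERT INTO leap_seconds VALUES
    ('2015-07-01 00:00:00+00', 36),
    ('2017-01-01 00:00:00+00', 37);         -- the most recent leap second

-- TAI epoch = POSIX epoch + offset in force at that instant:
SELECT extract(epoch FROM t.ts) + l.tai_minus_utc AS tai_epoch
FROM (VALUES (timestamptz '2016-01-01 00:00:00+00')) AS t(ts)
CROSS JOIN LATERAL (
    SELECT tai_minus_utc FROM leap_seconds
    WHERE applies_from <= t.ts
    ORDER BY applies_from DESC
    LIMIT 1
) AS l;
-- 1451606436 (= 1451606400 + 36)
```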

The IERS inserts leap seconds at times that are not expected to
interfere with business, so most people just don't care and the POSIX
time scale is good enough.  That's not always entirely successful:
I've forgotten all the details but once a leap second was inserted at
the moment the Japanese stock market opened, leading to confusion
(looking at the table[1] I think it must have been one of the June
ones where the 30th fell on a business day).  As for how these jumps
in the time scale really happen, there are various approaches
including "smearing" the extra second over a period of time (ie making
the neighbouring seconds shorter for a window of time) so that POSIX
time drifts towards being in sync with UTC over a couple of hours or
something; that works about as well as you'd expect with many
different NTP (etc) implementations using different approaches that
only rarely test these transitions, but again good enough for most
stuff.

The powers that be have agreed to stop adding UTC leap seconds after
2035, so UTC will eventually cease to be "coordinated" (the C) going
forward, and have a fixed offset against TAI and GPS.  The leap second
table will effectively be fixed and only of interest for dealing with
historical times 1972-2035.  And just like TAI and GPS, it'll begin to
drift out of sync with the earth's rotations without further
adjustments, since it's based on SI seconds and the earth is a
spinning chunk of wobbly stardust.

(My memory of all that might be a little fuzzy and I know zilch about
the science of it, but a couple of decades ago I worked on software
that talked to a lot of stock exchanges and we had to worry about when
certain things happened and think about smearing etc.  In practice
time zones were a far bigger source of stress... I recall a local
government suddenly declaring a daylight savings change to suit a
sporting event, etc...)

[1] https://en.wikipedia.org/wiki/Leap_second




Re: Parallel workers via functions?

2025-01-27 Thread Laurenz Albe
On Mon, 2025-01-27 at 18:08 +, Jim Vanns wrote:
> If I have a function that is marked 'stable parallel safe' and returns
> a table, can a calling function or procedure (marked volatile parallel
> unsafe) still take advantage of the parallel workers from the first
> function - as the data source. I.e.
> 
> func_a(); // selects, returns table, parallel safe
> func_b() {
>insert into foo
>select * from func_a(); // Will func_a still execute parallel
> workers to fetch the data?
> }
> 
> Or even if func_b() uses 'create temporary table as select * from
> func_a()' and then insert?
> 
> I ask because when I simply call func_a() from a psql shell, I see the
> parallel workers run and everything is nice and swift. But when called
> from a data-modifying function like func_b(), no workers are spawned
> :( Even from the read-part of the code.
> 
> Are there differences in functions vs. stored procedures that might
> affect the behaviour of the planner to disregard workers?

See https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html

The problem here is the INSERT.  Data modifying statements won't use
parallel query.

There are exceptions: CREATE TABLE ... AS SELECT ... should be able
to use parallel query.
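
A quick way to see the rule in action (big_table and foo are hypothetical
names; whether a parallel plan actually appears depends on table size and
settings):

```sql
-- A large read-only scan is eligible for a parallel plan
-- (look for "Gather" and "Workers Planned" in the output):
EXPLAIN (COSTS OFF) SELECT count(*) FROM big_table;

-- The same scan feeding an INSERT is planned serially, because the
-- statement as a whole is data-modifying:
EXPLAIN (COSTS OFF) INSERT INTO foo SELECT * FROM big_table;
```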

Yours,
Laurenz Albe




Re: Alter table fast

2025-01-27 Thread Gus Spier
I don't think I qualify as an expert, but I can nominate a course of action
for you to consider.

Rather than convert an existing column from one data type to another, might
it not be easier to simply recreate the table with the correct data type?
There are caveats:

You have to consider any referential integrity constraints.
You have to have a LOT of available disk space.
You really want to do this in batches.

If you can swing those issues, you might have a shot.

1. CREATE TABLE <new_table> (LIKE <old_table> INCLUDING ALL);
2. ALTER TABLE <new_table> ALTER COLUMN <column> TYPE <new_type>;
3. INSERT INTO <new_table> SELECT * FROM <old_table>;
-- recommend you do this in batches
4. DROP TABLE <old_table>;
5. ALTER TABLE <new_table> RENAME TO <old_table>;
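
A sketch of those steps with concrete (hypothetical) names, batching the
copy by a key range to keep each transaction small:

```sql
-- mytable, amount and id are hypothetical; adjust to the real schema.
CREATE TABLE mytable_new (LIKE mytable INCLUDING ALL);
ALTER TABLE mytable_new ALTER COLUMN amount TYPE numeric(15,0);

-- Copy in id-range batches rather than one giant INSERT:
INSERT INTO mytable_new SELECT * FROM mytable WHERE id <  10000000;
INSERT INTO mytable_new SELECT * FROM mytable WHERE id >= 10000000
                                                AND id <  20000000;
-- ...repeat for the remaining ranges, then swap:
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
```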

Regards,
Gus

On Thu, Jan 9, 2025 at 11:27 AM Ron Johnson  wrote:

> On Thu, Jan 9, 2025 at 11:25 AM veem v  wrote:
>
>> Hello,
>> It's postgres version 16.1, we want to convert an existing column data
>> type from integer to numeric and it's taking a long time. The size of the
>> table is ~50GB and the table has ~150million rows in it and it's not
>> partitioned. We tried running the direct alter and it's going beyond hours,
>> so wanted to understand from experts what is the best way to achieve this?
>>
>>
> Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
>
> --
> Death to , and butter sauce.
> Don't boil me, I'm still alive.
>  lobster!
>