Query regarding new version of shared library with older binary

2024-04-03 Thread M Tarkeshwar Rao
Hi All,

I have the following query:

Our application is compiled with the Postgres SQL Client C++ library (11.18) and is 
dynamically linked with the Postgres shared library (version libpq.so.5.11).
But on the customer production system where this application runs, a newer version of 
libpq (libpq.so.5.13) is available. This newer libpq shared library is packaged with the OS.
Our question is: will our binary be able to resolve its symbols with this new 
version (5.13)?

Regards,
Tarkeshwar


Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-04-03 Thread Nick Renders
On 29 Mar 2024, at 4:25, Thomas Munro wrote:
>
> I don't have any specific ideas and I have no idea what "ignore
> ownership" means ... what kind of filesystem is running on it?  For
> the simple SSD, is it directly connected, running a normal Apple APFS
> filesystem, or something more complicated?
>
> I wonder if this could be related to the change in 16 which started to
> rename that file:
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d8cd0c6c95c0120168df93aae095df4e0682a08a
>
> Did you ever run 15 or earlier on that system?


In the macOS Finder, when you show the Info (command+i) for an external drive 
(or any partition that is not the boot drive), there is a checkbox "Ignore 
ownership on this volume" in the Permissions section. I think it is by default 
"on" for external drives.

The external SSD is an Orico drive that is connected with USB-C. It is 
initialised as a GUID Partition Map with a single APFS partition.

We have run PostgreSQL 15 and earlier, before upgrading to 16 when it came out 
last year. We didn't have any problems with 16 until recently, after upgrading 
to Sonoma.


Nick




Re: Timestamp conversion Error in dynamic sql script

2024-04-03 Thread sud
This one worked. Thank you so much.

On Wed, Apr 3, 2024 at 2:27 AM Erik Wienhold  wrote:

> On 2024-04-02 22:08 +0200, sud wrote:
> > On Tue, Apr 2, 2024 at 7:46 PM Greg Sabino Mullane 
> > wrote:
> >
> > Now this block seems to be failing near the "LIKE" operator. Isn't it
> > allowed to add the check constraints along with the CREATE TABLE
> statement?
> >
> > [...]
> >
> > EXECUTE format('
> > CREATE TABLE parent_table_%s (
> > CHECK (partition_key >=  ''%s'' AND partition_key <  ''%s'' )
> > ) LIKE (parent_table including all);',
> > TO_CHAR(start_date + make_interval(days=>i),'YYYY_MM_DD'),
> > (start_date + make_interval(days=>i))::timestamptz ,
> > (start_date + make_interval(days=>i))::timestamptz
> > );
> >
> > [...]
> >
> > SQL Error [42601]: ERROR: syntax error at or near "LIKE"
> > Where: PL/pgSQL function inline_code_block line 8 at EXECUTE
> > Error position:
>
> The LIKE clause goes inside the parenthesis along with the column and
> constraint definitions, i.e.:
>
> CREATE TABLE parent_table_yyyy_mm_dd (
> LIKE parent_table,
> CHECK (...)
> );
>
> --
> Erik
>
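
For anyone finding this later, a minimal sketch of the corrected form might look
like the following (the sample start date, the loop, and the days => i + 1 upper
bound are assumptions for illustration, not copied from my actual script):

DO $$
DECLARE
    start_date date := '2024-01-01';
BEGIN
    FOR i IN 0..4 LOOP
        EXECUTE format('
            CREATE TABLE parent_table_%s (
                LIKE parent_table INCLUDING ALL,
                CHECK (partition_key >= %L AND partition_key < %L)
            );',
            TO_CHAR(start_date + make_interval(days => i), 'YYYY_MM_DD'),
            (start_date + make_interval(days => i))::timestamptz,
            (start_date + make_interval(days => i + 1))::timestamptz);
    END LOOP;
END $$;

Note that %L quotes the literals, which avoids the doubled single quotes used in
the original.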


Moving delta data faster

2024-04-03 Thread yudhi s
 Hi All,
 It's a PostgreSQL database, version 15.4. We have a requirement in which we
will initially be moving the full table data for 3-4 tables from the source
database to the target (i.e. Postgres). The maximum number of rows will be
~10 million in those tables. Subsequently these rows will be
inserted/updated based on the delta number of rows that got
inserted/updated in the source database. In some cases this changed data
can flow multiple times per day to the downstream (i.e. Postgres) database,
and in other cases once daily.

 We want to understand whether we should use upsert (insert on conflict), merge
statements, or anything else in such a scenario, so as to persist those delta
records faster in the target database while keeping the system online for
the users.

Regards
Yudhi


Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-04-03 Thread Thomas Munro
On Thu, Apr 4, 2024 at 3:11 AM Nick Renders  wrote:
> In the macOS Finder, when you show the Info (command+i) for an external drive 
> (or any partition that is not the boot drive), there is a checkbox "Ignore 
> ownership on this volume" in the Permissions section. I think it is by 
> default "on" for external drives.

Hmm.  Sounds suspicious, but why would only this file be affected?

> The external SSD is an Orico drive that is connected with USB-C. It is 
> initialised as a GUID Partition Map with a single APFS partition.
>
> We have run PostgreSQL 15 and earlier, before upgrading to 16 when it came 
> out last year. We didn't have any problems with 16 until recently, after 
> upgrading to Sonoma.

Interesting.  So the rename might have something to do with it, though
I don't have a theory for how.

Can you show what the permissions and ownership look like for pg_*
under there, normally, and once the system reaches this state?
Something like:

tmunro@phonebox postgresql % ls -slap pgdata/global/pg_*
16 -rw---  1 tmunro  staff  8192  4 Apr 09:50 pgdata/global/pg_control
 8 -rw---  1 tmunro  staff   524  4 Apr 09:50 pgdata/global/pg_filenode.map

I'm asking for "pg_*" because I want to see pg_control as well, to
understand the permissions for the other files in the cluster, and
because I want to see if there are any stray remnants of a temporary
file, which would be called pg_filenode.map.tmp.




Re: Moving delta data faster

2024-04-03 Thread Adrian Klaver

On 4/3/24 13:38, yudhi s wrote:

  Hi All,
  It's postgresql database version 15.4. We have a requirement in which 
we will be initially moving full table data for 3-4 tables, from source 
database to target(i.e. postgres) . Maximum number of rows will be 
~10million rows in those tables. Then subsequently these rows will be 
inserted/updated based on the delta number of rows that got 
inserted/updated in the source database. In some cases these changed 
data can flow multiple times per day to the downstream i.e. postgres 
database and in other cases once daily.


What is the source database?

Can it be reached with a FDW?:

https://wiki.postgresql.org/wiki/Foreign_data_wrappers

Can the delta on the source be output as CSV?



  Want to understand , if we should use upsert(insert on conflict) or 
merge statements or anything else in such a scenario so as to persist 
those delta records faster in the target database, while making the 
system online to the users?


Regards
Yudhi


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-04-03 Thread Thomas Munro
On Sat, Mar 23, 2024 at 3:01 AM Nick Renders  wrote:
> We now have a second machine with this issue: it is an Intel Mac mini running 
> macOS Sonoma (14.4) and PostgreSQL 16.2.
> This one only has a single Data directory, so there are no multiple instances 
> running.

BTW if you're running databases on mains-powered Macs, I have a patch
that you might be interested in, which so far hasn't attracted any
reviews.  The short version is that I bet you can at least lose many
seconds of commits (because WAL doesn't really hit durable part of
disk), and possibly also fail to recover (pg_control hits disk before
WAL, not sure if this is really possible), if you yank the power and
you're using the default settings for wal_sync_method.  I'd like to
rationalise the settings for that stuff and make it safe by default.

I don't know anything about the USB storage pathway but I'd be
surprised if it's different.

https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BF0EL4Up6yVYbbcWse4xKaqW4wc2xpw67Pq9FjmByWVg%40mail.gmail.com




Re: recovery.signal not being removed when recovery complete

2024-04-03 Thread Michael Paquier
On Tue, Mar 26, 2024 at 06:22:32PM -0400, Isaac Morland wrote:
> I use a script to restore a backup to create a testing copy of the
> database. I set the following in postgresql.auto.conf:
> 
> recovery_target = 'immediate'
> recovery_target_action = 'promote'

Why not, after a pg_basebackup -R I assume.  Would you mind sharing
your commands?

> In the logs I get "recovery stopping after reaching consistency" then a
> moment later "database system is ready to accept read-only connections",
> then some entries about restoring log files, then "database system is ready
> to accept connections".

If you have some logs, that could help as well.

> I am able to make changes (e.g. CREATE TABLE), yet recovery.signal is still
> present. My understanding is that recovery.signal should be removed when
> recovery is finished (i.e., more or less when "database system is ready to
> accept connections" is logged?), unless recovery_target_action is set to
> 'shutdown'.
> 
> Any ideas? Even just confirming/denying I understand the above correctly
> would help.

Not removing the two .signal files when promotion is achieved would be
a problem to me because we'd reenter recovery again at a follow-up
startup.  ArchiveRecoveryRequested should be set if there was either
recovery.signal or standby.signal found at startup, meaning that we
should have a TLI jump at promotion with a physical removal of both
files and a LOG for a "selected new timeline ID".
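
If you want to double-check that on your restored copy, something like this once
it accepts connections should do (just a suggestion; pg_control_checkpoint()
needs a reasonably recent major version):

SELECT timeline_id, checkpoint_time FROM pg_control_checkpoint();

After a real promotion you should see the timeline_id one higher than the
backup's, and recovery.signal gone from the data directory.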
--
Michael




Re: Moving delta data faster

2024-04-03 Thread yudhi s
On Thu, Apr 4, 2024 at 2:41 AM Adrian Klaver 
wrote:

> On 4/3/24 13:38, yudhi s wrote:
> >   Hi All,
> >   It's postgresql database version 15.4. We have a requirement in which
> > we will be initially moving full table data for 3-4 tables, from source
> > database to target(i.e. postgres) . Maximum number of rows will be
> > ~10million rows in those tables. Then subsequently these rows will be
> > inserted/updated based on the delta number of rows that got
> > inserted/updated in the source database. In some cases these changed
> > data can flow multiple times per day to the downstream i.e. postgres
> > database and in other cases once daily.
>
> What is the source database?
>
> Can it be reached with a FDW?:
>
> https://wiki.postgresql.org/wiki/Foreign_data_wrappers
>
> Can the delta on the source be output as CSV?
>
>

Thank you Adrian.

And one thing I forgot to mention: this target PostgreSQL database will be
on AWS RDS, whereas the source Oracle database is on premises. I think we
don't have the FDW extension currently in place, but we can get that. I am
just not able to understand clearly, though: do you mean export the
data from the source as CSV and do a truncate and import on the target? And as
this data will be moved over the network, won't that cause slowness?

The source database here is an Oracle database. Correct me if I am wrong, but it
looks like a foreign data wrapper is like a DB link. Or do you mean writing a query
on the target database (which can be an UPSERT or MERGE) that joins
the table from the source database through the DB link/FDW? But my question
was whether we should use UPSERT or MERGE for comparing and loading the
delta records into the target PostgreSQL database. I want to understand which
is more performant, as I have seen MERGE having performance issues
in the past, but I am not very sure about that.


Re: Moving delta data faster

2024-04-03 Thread Adrian Klaver

On 4/3/24 20:54, yudhi s wrote:
On Thu, Apr 4, 2024 at 2:41 AM Adrian Klaver > wrote:


On 4/3/24 13:38, yudhi s wrote:
 >   Hi All,
 >   It's postgresql database version 15.4. We have a requirement in which
 > we will be initially moving full table data for 3-4 tables, from source
 > database to target(i.e. postgres) . Maximum number of rows will be
 > ~10million rows in those tables. Then subsequently these rows will be
 > inserted/updated based on the delta number of rows that got
 > inserted/updated in the source database. In some cases these changed
 > data can flow multiple times per day to the downstream i.e. postgres
 > database and in other cases once daily.

What is the source database?

Can it be reached with a FDW?:

https://wiki.postgresql.org/wiki/Foreign_data_wrappers


Can the delta on the source be output as CSV?



Thank you Adrian.

And one thing i forgot to mention this target postgresql database would 
be on AWS RDS whereas the source Oracle databases is on premise. I think 
we don't have the FDW extension currently in place but we can get that. 
I am just not able to understand clearly  though, but do you mean export 
the data from source using CSV and do truncate and import on target. And 
as these data will be moved through the network won't that cause slowness?


The source database here is Oracle database. Correct me if wrong, it 
looks like foreign data wrapper is like a DB link. Or do you mean 
writing a query on the target database (which can be UPSERT or MERGE) 
but will be joining the table from the source database through the 
DBlink/DDW? But my question was whether we should use UPSERT or MERGE 
for comparing and loading the delta records to the target postgresql 
database. Want to understand which is more performant , as I see in the 
past Merge having performance issues in the past, but not very sure 
about that.


My motivation was to get some basic information about your setup and 
what you are trying to achieve.


If I understand correctly you have:

1) An Oracle database with tables that you want to copy the complete 
data from into a Postgres database. For this sort of thing 
COPY (https://www.postgresql.org/docs/current/sql-copy.html) on the 
Postgres end, using CSV data generated from the source, is probably the 
quickest bulk load method; a minimal sketch follows after point 2 below.


2) After the initial load you want to do follow up INSERT/UPDATEs based 
on a delta of the source tables relative to the initial load. This is 
still a bit of a mystery to me. How are you determining the delta: a) On the 
source end entirely or b) Target relative to source? Also what is the 
anticipated size of the delta per transfer?
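
A minimal sketch of that bulk-load step, assuming a CSV export of the source
table is available on the machine running psql (table and file names are
placeholders):

-- client-side load via psql; also works against managed targets such as RDS
\copy target_table FROM '/path/to/source_table.csv' WITH (FORMAT csv, HEADER true)

-- or server-side, if the file is readable by the Postgres server itself
COPY target_table FROM '/data/source_table.csv' WITH (FORMAT csv, HEADER true);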


Additional information needed:

1) Network distance between source and target?

2) Network capacity?

3) Expected load on both source and target servers from other operations?

--
Adrian Klaver
adrian.kla...@aklaver.com





Failure of postgres_fdw because of TimeZone setting

2024-04-03 Thread Adnan Dautovic
Hi everyone,


I have some trouble using postgres_fdw to display some data from a Postgres 
database I do not control inside a Postgres database that I do control. I 
filled out the form from the wiki below and would appreciate any tips.


* A description of what you are trying to achieve and what results you expect.:
I am trying to import the public schema of a Postgres instance I do not control 
(I will call it "remote"), but have read-only access to, into a Postgres 
instance I fully control (I will call it "local"), using the foreign data 
wrapper postgres_fdw.
The result I expect is that the import runs successfully and that I have access 
to the foreign tables.
However, I currently fail to achieve this: I always get an error message 
regarding the TimeZone setting. I also tried, without success, different 
versions for the local instance, namely 9.4.14, to be as close as possible 
to the remote version, and 13.14, to try an up-to-date version of 
Postgres 13.
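
For context, the kind of setup I am attempting looks roughly like this (server
name, host, and credentials below are placeholders rather than my real values):

CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_pg
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'remote.example.com', port '5432', dbname 'remotedb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER remote_pg
    OPTIONS (user 'readonly_user', password 'secret');

IMPORT FOREIGN SCHEMA public
    FROM SERVER remote_pg
    INTO public;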

* PostgreSQL version number you are running:

remote: "PostgreSQL 9.4.13 on x86_64-apple-darwin20.2.0, compiled by Apple 
clang version 12.0.0 (clang-1200.0.32.29), 64-bit"

local: "PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit"

* How you installed PostgreSQL:

remote: I do not know. It is an instance that runs as part of a software I have 
no further insight in.

local: I am using the official PostgreSQL docker image.

* Changes made to the settings in the postgresql.conf file:

remote:
"name" "current_setting" "source"
"application_name" "pgAdmin 4 - CONN:2930678" "client"
"bdr.log_conflicts_to_table" "on" "configuration file"
"bytea_output" "hex" "session"
"checkpoint_completion_target" "0.9" "configuration file"
"checkpoint_segments" "32" "configuration file"
"client_encoding" "UNICODE" "session"
"client_min_messages" "notice" "session"
"DateStyle" "ISO, MDY" "session"
"default_text_search_config" "pg_catalog.english" "configuration file"
"dynamic_shared_memory_type" "posix" "configuration file"
"effective_cache_size" "19046MB" "configuration file"
"lc_messages" "C" "configuration file"
"lc_monetary" "C" "configuration file"
"lc_numeric" "C" "configuration file"
"lc_time" "C" "configuration file"
"listen_addresses" "*" "configuration file"
"log_line_prefix" "%t [%d]" "configuration file"
"log_timezone" "Europe/Berlin" "configuration file"
"logging_collector" "on" "configuration file"
"maintenance_work_mem" "1587MB" "configuration file"
"max_connections" "100" "configuration file"
"max_replication_slots" "10" "configuration file"
"max_stack_depth" "2MB" "environment variable"
"max_wal_senders" "10" "configuration file"
"max_worker_processes" "10" "configuration file"
"port" "5432" "configuration file"
"shared_buffers" "7936MB" "configuration file"
"ssl" "on" "configuration file"
"ssl_ca_file" "root.crt" "configuration file"
"ssl_cert_file" "server.crt" "configuration file"
"ssl_key_file" "server.key" "configuration file"
"tcp_keepalives_idle" "10" "configuration file"
"TimeZone" "Europe/Berlin" "configuration file"
"track_commit_timestamp" "on" "configuration file"
"wal_level" "logical" "configuration file"
"wal_receiver_timeout" "20min" "configuration file"
"wal_sender_timeout" "20min" "configuration file"
"wal_sync_method" "fsync_writethrough" "configuration file"
"work_mem" "81264kB" "configuration file"

local:
"name" "current_setting" "source"
"application_name" "pgAdmin 4 - CONN:511122" "client"
"bytea_output" "hex" "session"
"client_encoding" "UNICODE" "session"
"client_min_messages" "notice" "session"
"DateStyle" "ISO, MDY" "session"
"default_text_search_config" "pg_catalog.english" "configuration file"
"dynamic_shared_memory_type" "posix" "configuration file"
"lc_messages" "en_US.utf8" "configuration file"
"lc_monetary" "en_US.utf8" "configuration file"
"lc_numeric" "en_US.utf8" "configuration file"
"lc_time" "en_US.utf8" "configuration file"
"listen_addresses" "*" "configuration file"
"log_timezone" "Etc/UTC" "configuration file"
"max_connections" "100" "configuration file"
"max_stack_depth" "2MB" "environment variable"
"max_wal_size" "1GB" "configuration file"
"min_wal_size" "80MB" "configuration file"
"shared_buffers" "128MB" "configuration file"
"TimeZone" "Etc/UTC" "configuration file"

* Operating system and version:

remote: Mac mini (2018), running macOS Monterey 12.5.1, output of uname -a: 
"Darwin mac 21.6.0 Darwin Kernel Version 21.6.0: Wed Aug 10 14:25:27 PDT 2022; 
root:xnu-8020.141.5-2/RELEASE_X86_64 x86_64"

local: A Linux VM running Debian 11.3 ARM64, output of uname -a: "Linux 
debian-gnu-linux-11 5.10.0-28-arm64 #1 SMP Debian 5.10.209-2 (2024-01-31) 
aarch64 GNU/Linux"

* What program you're using to connect to PostgreSQL:
pgAdmin 4
 
* Is there anything relevant or unusual in the PostgreSQL server logs?:
No, not that I know of.
 
* What you were doing when the error happened / how to cause the error:
First, I successfully enabled the pos

Re: Moving delta data faster

2024-04-03 Thread yudhi s
On Thu, Apr 4, 2024 at 10:16 AM Adrian Klaver 
wrote:

> On 4/3/24 20:54, yudhi s wrote:
> > On Thu, Apr 4, 2024 at 2:41 AM Adrian Klaver  > > wrote:
> >
> >> Thank you Adrian.
> >
> > And one thing i forgot to mention this target postgresql database would
> > be on AWS RDS whereas the source Oracle databases is on premise. I think
> > we don't have the FDW extension currently in place but we can get that.
> > I am just not able to understand clearly  though, but do you mean export
> > the data from source using CSV and do truncate and import on target. And
> > as these data will be moved through the network won't that cause
> slowness?
> >
> > The source database here is Oracle database. Correct me if wrong, it
> > looks like foreign data wrapper is like a DB link. Or do you mean
> > writing a query on the target database (which can be UPSERT or MERGE)
> > but will be joining the table from the source database through the
> > DBlink/DDW? But my question was whether we should use UPSERT or MERGE
> > for comparing and loading the delta records to the target postgresql
> > database. Want to understand which is more performant , as I see in the
> > past Merge having performance issues in the past, but not very sure
> > about that.
>
> My motivation was to get some basic information about your setup and
> what you are trying to achieve.
>
> If I understand correctly you have:
>
> 1) An Oracle database with tables that you want to copy the complete
> data from to a Postgres database. For this sort of thing
> COPY(https://www.postgresql.org/docs/current/sql-copy.html) on the
> Postgres end using CSV data generated from the source is probably the
> quickest bulk load method.
>
> 2) After the initial load you want to do follow up INSERT/UPDATEs based
> on a delta of the source tables relative to the initial load. This is
> still a bit of mystery to me. How are determining the delta: a) On the
> source end entirely or b) Target relative to source? Also what is the
> anticipated size of the delta per transfer?
>
> Additional information needed:
>
> 1) Network distance between source and target?
>
> 2) Network capacity?
>
> 3) Expected load on both source and target servers from other operations?
>
>
Thank you. Actually I was trying to understand how to cater for the delta load
after the one-time load is done. The delta change in records is planned to
be found based on the primary keys of the tables: if the key is found, it
will update the record; if the key is not found, it will insert the row.

Basically, a select query on the source database will fetch the data
for a certain time interval (based on the latest update timestamp, or the create
timestamp if available, or else a full dump), put it on S3, and
from S3 it will be picked up and merged into the target Postgres
database. As upsert and merge both look similar, I was wondering
which one we should use here for loading the delta records.
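
For example, the two forms I am comparing would be roughly as below (table and
column names are just placeholders, not our actual schema):

-- upsert: needs a unique or primary key constraint on id
INSERT INTO target_table (id, col1, col2)
SELECT id, col1, col2 FROM staging_table
ON CONFLICT (id) DO UPDATE
    SET col1 = EXCLUDED.col1,
        col2 = EXCLUDED.col2;

-- merge: available from PostgreSQL 15
MERGE INTO target_table t
USING staging_table s ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET col1 = s.col1, col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (id, col1, col2) VALUES (s.id, s.col1, s.col2);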


Re: Failure of postgres_fdw because of TimeZone setting

2024-04-03 Thread Tom Lane
Adnan Dautovic  writes:
> I have some trouble using postgres_fdw in order to display some data from a 
> Postgres database I do not control in a Postgres database that I do control.

Hmm ...

> * PostgreSQL version number you are running:

> remote: "PostgreSQL 9.4.13 on x86_64-apple-darwin20.2.0, compiled by Apple 
> clang version 12.0.0 (clang-1200.0.32.29), 64-bit"

> local: "PostgreSQL 13.0 (Debian 13.0-1.pgdg100+1) on x86_64-pc-linux-gnu, 
> compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit"

You realize of course that PG 9.4.x is four years past EOL, and that
the last release in that series was 9.4.26, so that your remote is
missing three or so years' worth of bug fixes even before its EOL.
The underlying macOS platform looks a bit hoary as well.

(You gain exactly zero points for good maintenance practice on the
local side either, since PG 13's current release is 13.14.  If you're
going to install Postgres and then ignore bug-fix releases for
multiple years, I counsel not starting from a dot-zero release.
However, that doesn't seem to be related to your immediate problem.)

Having said that,

> * The EXACT TEXT of the error message you're getting, if there is one:
> ERROR:  invalid value for parameter "TimeZone": "UTC"
> CONTEXT:  remote SQL command: SET timezone = 'UTC'

That's just bizarre.  There is no release of Postgres anywhere,
at any time in the past couple decades, that should not think that
"UTC" is a valid timezone setting.  My best guess is that the
remote was built with a --with-system-tzdata setting that's not
actually valid for its platform.
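
If you want to verify that theory, running something like this directly against
the remote server should show whether its timezone database actually knows
about UTC (just a quick sanity check, not a fix):

SELECT name FROM pg_timezone_names WHERE name IN ('UTC', 'Etc/UTC');
SET TimeZone = 'UTC';  -- expected to fail the same way if the tzdata files are missing or broken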

Short answer: your remote database is very incompetently
administrated.  If the remote's DBA is not willing to work on fixing
it, I suggest finding a job where you don't have to deal with that.

regards, tom lane