Re: How to interpret 'depends on' errors in pg_restore?

2024-05-02 Thread Fire Emerald
I hadn't used pg_dump/pg_restore until today, and I finally found my mistake,
which led to the "problem" described below.


The "depends on" output comes from the -l (lowercase L, as in Lima) flag; what I wanted
was the -1 (the number one) flag, which makes pg_restore run everything in a
single transaction. Since -l does not execute anything, nothing was logged in the
postgres server log and no error was shown anywhere.


The two characters looked so similar in my editors and shells that I thought I was
using -1 when I was in fact using -l.
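
For anyone else bitten by this, the two flags side by side (file and database names are placeholders):

pg_restore -l file.dump              # list the archive's table of contents; restores nothing
pg_restore -1 -d target file.dump    # restore into "target" in a single transaction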


It's always the tiny, obvious thing that we don't see.

Best regards,
Chris

On 28 March 2024 at 16:57:04, Fire Emerald wrote:

On 28 March 2024 at 15:00:06, Tom Lane wrote:


Fire Emerald  writes:

Then I ran pg_restore -d target --verbose -Fc file.dump and saw this in the
output:



5145 0 730750 TABLE subpartitions backends_y2024w03 userA
;depends on: 237
 and so on ...


That is not an error, it's just verbose display of one of the items
in the dump.


Well, I know it's not an error, but it's all I got. There was no
error shown. The command completed, but nothing was imported.





Nothing was restored.


You would need to show us the actual errors.  (Suggestion:
leave off --verbose, it's just clutter.)  A guess though is
that the import failed because of foreign key constraints.
--data-only mode is not good at ordering the table loads to
ensure that FK constraints are satisfied on-the-fly.

 regards, tom lane


As I said, the same import done with INSERT INTO statements worked without any issues.
So no, there are no FK constraints failing.


But the target and source tables had partitioned tables attached, using
ATTACH PARTITION.


The schema was like:
db1 schema1 public table1 (links to the listed below)
db1 schema1 subpartitions backends_y2024w03
db1 schema1 subpartitions backends_y2024w04
db1 schema1 subpartitions backends_y2024w05

The partitioning must be the problem somehow.
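
For context, a rough SQL sketch of that layout — table and column names are guesses, not from the original post:

CREATE TABLE public.table1 (id bigint, created_at date) PARTITION BY RANGE (created_at);
CREATE SCHEMA subpartitions;
CREATE TABLE subpartitions.backends_y2024w03 (LIKE public.table1);
ALTER TABLE public.table1 ATTACH PARTITION subpartitions.backends_y2024w03
    FOR VALUES FROM ('2024-01-15') TO ('2024-01-22');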




Re: Linked directory or explicit reference

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 12:50 AM Senor Cervesa wrote:
[snip]

>  I'm not sure what would trigger "directory not empty".


The lost+found directory.
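
For example, initdb refuses a mount point that still contains lost+found; a sketch of the failure and the usual workaround (paths illustrative, message wording approximate):

$ initdb -D /mnt/pgdata
initdb: error: directory "/mnt/pgdata" exists but is not empty
initdb: hint: It contains a lost+found directory, perhaps due to it being a mount point.
$ mkdir /mnt/pgdata/data && initdb -D /mnt/pgdata/data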


Listing only the user defined types (with owners)

2024-05-02 Thread Durumdara
Hello!

I have a script which can change the table owners to the database owner.

I select the tables like this:

FOR r IN SELECT tablename FROM pg_tables WHERE (schemaname = 'public')
and (tableowner <> act_dbowner)
LOOP
...

For types I found pg_type, but this contains all types.

For example, I have only one user-defined type, "T_TEST", but pg_type
contains the built-in data types and all other types, from every schema.

Do you have a working Query which lists the user defined types with the
owners?

Thank you for your help!

Best regards
dd


Re: Listing only the user defined types (with owners)

2024-05-02 Thread Kashif Zeeshan
Hi

You can find user-defined enum types with the following query.

CREATE TYPE bug_status AS ENUM ('new', 'open', 'closed');

SELECT typname
FROM pg_catalog.pg_type
  JOIN pg_catalog.pg_namespace
  ON pg_namespace.oid = pg_type.typnamespace
WHERE
typtype = 'e' and nspname NOT IN ('pg_catalog', 'information_schema');
  typname
------------
 bug_status

The values for typtype are as follows: b for a base type, c for a composite
type (e.g., a table's row type), d for a domain, e for an enum type, p for a
pseudo-type, and r for a range type. See also typrelid and typbasetype.
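
Since the original question also asked for owners, a broader variant might look like this — a sketch covering enums, composites, domains, and ranges:

SELECT n.nspname AS schema,
       t.typname AS type_name,
       pg_catalog.pg_get_userbyid(t.typowner) AS owner
FROM pg_catalog.pg_type t
JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE t.typtype IN ('e', 'c', 'd', 'r')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  -- skip the implicit row types of tables; keep stand-alone composite types
  AND NOT EXISTS (SELECT 1 FROM pg_catalog.pg_class c
                  WHERE c.oid = t.typrelid AND c.relkind <> 'c');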


Regards
Kashif Zeeshan
Bitnine Global

On Thu, May 2, 2024 at 4:40 PM Durumdara  wrote:

> Hello!
>
> I have a script which can change the table owners to the database owner.
>
> I select the tables like this:
>
> FOR r IN SELECT tablename FROM pg_tables WHERE (schemaname = 'public')
> and (tableowner <> act_dbowner)
> LOOP
> ...
>
> For types I found pg_type, but this contains all types.
>
> For example I have only one user defined type, like "T_TEST", but this
> pg_type relation contains the basic data types,  other data types, from any
> schema.
>
> Do you have a working Query which lists the user defined types with the
> owners?
>
> Thank you for your help!
>
> Best regards
> dd
>
>
>


Re: Prevent users from executing pg_dump against tables

2024-05-02 Thread David G. Johnston
On Wednesday, May 1, 2024, RAJAMOHAN  wrote:

>
>  Main reason being I don't want the data to be copied from the database to
> their local machines.
>

You cannot stop it from being copied to their local machine; you can only make
it difficult.  And really not that difficult.

Trust but verify - i.e., use something like pg_audit.
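
For illustration, a minimal setup of that extension (spelled pgaudit) that logs reads — a sketch, not a full deployment:

# postgresql.conf (requires a restart)
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'read'

-- then, once per database:
CREATE EXTENSION pgaudit;

With that in place, SELECT and COPY statements by audited roles show up in the server log as AUDIT: lines.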

David J.


Re: Listing only the user defined types (with owners)

2024-05-02 Thread Thom Brown
On Thu, 2 May 2024 at 12:40, Durumdara  wrote:

> Hello!
>
> I have a script which can change the table owners to the database owner.
>
> I select the tables like this:
>
> FOR r IN SELECT tablename FROM pg_tables WHERE (schemaname = 'public')
> and (tableowner <> act_dbowner)
> LOOP
> ...
>
> For types I found pg_type, but this contains all types.
>
> For example I have only one user defined type, like "T_TEST", but this
> pg_type relation contains the basic data types,  other data types, from any
> schema.
>
> Do you have a working Query which lists the user defined types with the
> owners?
>
> Thank you for your help!
>

You can always cheat and copy what psql does when you tell it to list all
user types with extended output (\dT+):

postgres=# SET log_min_duration_statement = 0;
SET
postgres=# SET client_min_messages TO LOG;
LOG:  duration: 0.137 ms  statement: SET client_min_messages TO LOG;
SET
postgres=# \dT+
LOG:  duration: 2.901 ms  statement: SELECT n.nspname as "Schema",
  pg_catalog.format_type(t.oid, NULL) AS "Name",
  t.typname AS "Internal name",
  CASE WHEN t.typrelid != 0
  THEN CAST('tuple' AS pg_catalog.text)
WHEN t.typlen < 0
  THEN CAST('var' AS pg_catalog.text)
ELSE CAST(t.typlen AS pg_catalog.text)
  END AS "Size",
  pg_catalog.array_to_string(
  ARRAY(
  SELECT e.enumlabel
  FROM pg_catalog.pg_enum e
  WHERE e.enumtypid = t.oid
  ORDER BY e.enumsortorder
  ),
  E'\n'
  ) AS "Elements",
  pg_catalog.pg_get_userbyid(t.typowner) AS "Owner",
CASE WHEN pg_catalog.cardinality(t.typacl) = 0 THEN '(none)' ELSE
pg_catalog.array_to_string(t.typacl, E'\n') END AS "Access privileges",
pg_catalog.obj_description(t.oid, 'pg_type') as "Description"
FROM pg_catalog.pg_type t
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
WHERE (t.typrelid = 0 OR (SELECT c.relkind = 'c' FROM pg_catalog.pg_class c
WHERE c.oid = t.typrelid))
  AND NOT EXISTS(SELECT 1 FROM pg_catalog.pg_type el WHERE el.oid =
t.typelem AND el.typarray = t.oid)
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND pg_catalog.pg_type_is_visible(t.oid)
ORDER BY 1, 2;
                                       List of data types
 Schema | Name | Internal name | Size  | Elements |   Owner   | Access privileges | Description
--------+------+---------------+-------+----------+-----------+-------------------+-------------
 public | test | test          | tuple |          | thombrown |                   |
(1 row)

Regards

Thom


Re: Prevent users from executing pg_dump against tables

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 1:47 AM RAJAMOHAN  wrote:

> Hello all,
>
> In our production db infrastructure, we have one read_only role which has
> read privileges against all tables in schema A.
>
> We are planning to grant this role to some developers for viewing the
> data, but also I want to limit the users from executing statements like
> copy or using pg_dump. Main reason being I don't want the data to be copied
> from the database to their local machines.
>
> I tried by implementing triggers, but was not able to figure out a way to
> restrict the pg_dump and allow only select statements.
>

> Is there a way to implement this? Please advise.
> 
>

If you can query a table, then you can save the query contents to your
local context.  That's a fundamental law of nature, since you gave them
read privs.

For example:
psql --host=SomeEC2Node $DB -Xc "SELECT * FROM read_only_table;" >
read_only_table.txt

That even works on Windows.


Re: Listing only the user defined types (with owners)

2024-05-02 Thread Tom Lane
Thom Brown  writes:
> On Thu, 2 May 2024 at 12:40, Durumdara  wrote:
>> Do you have a working Query which lists the user defined types with the
>> owners?

> You can always cheat and copy what psql does when you tell it to list all
> user types with extended output (\dT+):

If you want to look at what SQL psql issues for a \d-type command,
you don't even need to look at the server log.  Just start psql with
the -E (--echo-hidden) switch.

$ psql -E
psql (17devel)
Type "help" for help.

postgres=# \dT+
/******** QUERY *********/
SELECT n.nspname as "Schema",
  pg_catalog.format_type(t.oid, NULL) AS "Name",
  t.typname AS "Internal name",
  CASE WHEN t.typrelid != 0
  THEN CAST('tuple' AS pg_catalog.text)
WHEN t.typlen < 0
  THEN CAST('var' AS pg_catalog.text)
ELSE CAST(t.typlen AS pg_catalog.text)
  END AS "Size",
... etc etc ...

regards, tom lane




Re: How to interpret 'depends on' errors in pg_restore?

2024-05-02 Thread Adrian Klaver

On 5/2/24 02:20, Fire Emerald wrote:
I hadn't used pg_dump/pg_restore until today, and I finally found my mistake,
which led to the "problem" described below.


The "depends on" output comes from the -l (lowercase L, as in Lima) flag; what I
wanted was the -1 (the number one) flag, which makes pg_restore run everything
in a single transaction. Since -l does not execute anything, nothing was logged
in the postgres server log and no error was shown anywhere.


-l does indeed execute something per:

https://www.postgresql.org/docs/current/app-pgrestore.html

"
-l
--list

List the table of contents of the archive. The output of this 
operation can be used as input to the -L option. Note that if filtering 
switches such as -n or -t are used with -l, they will restrict the items 
listed.

"

As example:

pg_restore -l redmine41_14_032124.out


;
; Archive created at 2024-03-21 01:00:01 PDT
; dbname: redmine
; TOC Entries: 455
; Compression: -1
; Dump Version: 1.14-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 14.11 (Ubuntu 14.11-1.pgdg22.04+1)
; Dumped by pg_dump version: 14.11 (Ubuntu 14.11-1.pgdg22.04+1)
;
;
; Selected TOC Entries:
;
209; 1259 17070 TABLE public ar_internal_metadata redmine
210; 1259 17075 TABLE public attachments redmine
211; 1259 17088 SEQUENCE public attachments_id_seq redmine
4069; 0 0 SEQUENCE OWNED BY public attachments_id_seq redmine
...


Generally you want to redirect that to a file with -f,
with the goal of using it with:

"
-L list-file
--use-list=list-file

Restore only those archive elements that are listed in list-file, 
and restore them in the order they appear in the file. Note that if 
filtering switches such as -n or -t are used with -L, they will further 
restrict the items restored.


list-file is normally created by editing the output of a previous 
-l operation. Lines can be moved or removed, and can also be commented 
out by placing a semicolon (;) at the start of the line. See below for 
examples.

"

You instead redirected the output to the target database and that led to 
your errors.





The two characters looked so similar in my editors and shells that I thought
I was using -1 when I was in fact using -l.


It's always the tiny, obvious thing that we don't see.

Best regards,
Chris

On 28 March 2024 at 16:57:04, Fire Emerald wrote:


On 28 March 2024 at 15:00:06, Tom Lane wrote:


Fire Emerald  writes:
Then I ran pg_restore -d target --verbose -Fc file.dump and saw this in the
output:



5145 0 730750 TABLE subpartitions backends_y2024w03 userA
;        depends on: 237
 and so on ...


That is not an error, it's just verbose display of one of the items
in the dump.


Well, I know it's not an error, but it's all I got. There was
no error shown. The command completed, but nothing was imported.



Nothing was restored.


You would need to show us the actual errors.  (Suggestion:
leave off --verbose, it's just clutter.)  A guess though is
that the import failed because of foreign key constraints.
--data-only mode is not good at ordering the table loads to
ensure that FK constraints are satisfied on-the-fly.

  regards, tom lane


As I said, the same import done with INSERT INTO statements worked without any
issues. So no, there are no FK constraints failing.


*But* the target and source tables had partitioned tables attached,
using ATTACH PARTITION.


The schema was like:
db1 schema1 public table1 (links to the listed below)
db1 schema1 subpartitions backends_y2024w03
db1 schema1 subpartitions backends_y2024w04
db1 schema1 subpartitions backends_y2024w05

The partitioning must be the problem somehow.




--
Adrian Klaver
adrian.kla...@aklaver.com





Restore of a reference database kills the auto analyze processing.

2024-05-02 Thread HORDER Philip
Running Postgres 15.3 with PostGIS 3.3
On Windows 10 (yes, I know)

It's a single node db with no replication, topping out at about 200GB.

We have a schema (A) in the default 'postgres' maintenance database, which our 
software services connect to, with one set of users (SU)

We have another database, let's call it LFM, which contains reference data for 
some COTS software.  I don't know what's in it, we just get given updates for 
it in pg_backup binary files, about 2MB each.
This is accessed by a different postgres user (LFU) supplied to the COTS tool.

To apply an update, we:
  stop the applications that use LFM,
  set the user (LFU) to NOLOGIN,
  kill any left-over connections: select pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'lfm' and usename = 'lfu';
  drop the existing reference database using the dropdb utility,
  reload the new file using pg_restore and the postgres superuser,
  set the user (LFU) to LOGIN.
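
For illustration, the same sequence as shell commands — a sketch, with the dump file name as a placeholder:

$ psql -d postgres -c "ALTER ROLE lfu NOLOGIN"
$ psql -d postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'lfm' AND usename = 'lfu'"
$ dropdb lfm
$ createdb -O postgres lfm
$ pg_restore -U postgres -d lfm lfm_update.dump
$ psql -d postgres -c "ALTER ROLE lfu LOGIN"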

Other services connecting to the default db, with SU users should keep running 
with no dropouts.

This works, some of the time.
If I repeat the update process, somewhere around run #4 the auto analyzer stops
working: it only analyzes tables in the new db at the point of reload, then
shuts off again.
All vacuum and analyze operations on the 'postgres' database just stop, even
though data is still flowing into it.

With log_autovacuum_min_duration = 0, we are logging all vacuum & analyze 
operations, so we can see when the entries shut off in the Postgres log files, 
e.g.
2024-05-02 14:52:01.597 GMT [6896]: [23-1] db=,user=,app=,client= LOG:  
automatic analyze of table "lfm.pg_catalog.pg_trigger"

The only way I can find of getting the analyzer back is to restart Postgres.

We've narrowed the cause down to the pg_restore, but have no idea where to go
from here.
Can anyone help stand the analyzer back up, please?

Most configs are left at default, (apart from memory settings) but we currently 
have
autovacuum_max_workers = 10
log_autovacuum_min_duration = 0

thanks,

Phil Horder
Database Mechanic

Thales Land & Air Systems





Re: Restore of a reference database kills the auto analyze processing.

2024-05-02 Thread Adrian Klaver




On 5/2/24 8:52 AM, HORDER Philip wrote:

Running Postgres 15.3 with PostGIS 3.3

On Windows 10 (yes, I know)

It’s a single node db with no replication, topping out at about 200GB.

We have a schema (A) in the default 'postgres' maintenance database, 
which our software services connect to, with one set of users (SU)


We have another database, let’s call it LFM, which contains reference 
data for some COTS software.  I don't know what's in it, we just get 
given updates for it in pg_backup binary files, about 2MB each.


Do you mean pg_basebackup, pg_dump or something else?


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Restore of a reference database kills the auto analyze processing.

2024-05-02 Thread Adrian Klaver

On 5/2/24 08:52, HORDER Philip wrote:

Running Postgres 15.3 with PostGIS 3.3

On Windows 10 (yes, I know)

It’s a single node db with no replication, topping out at about 200GB.

We have a schema (A) in the default 'postgres' maintenance database, 
which our software services connect to, with one set of users (SU)


The above is probably not a good idea. The 'postgres' database is
generally treated as a throwaway database for establishing an initial
connection. Many utilities/tools use it for that purpose, and having your
data in it exposes that data.




This works, some of the time.

If I repeat the update process, somewhere around run #4 the auto 
analyzer stops working, and only analyzes tables in the new db at the 
point of reload, then shuts off again.


All vacuum and analyze operations on the 'postgres' database just stops, 
even though there is still data processing into it.


Is there enough data processing?

Autovacuum has thresholds for turning on; are you sure those thresholds
simply aren't being met?
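
For reference, the analyze side of that check is roughly: a table is autoanalyzed once the rows modified since the last analyze exceed autovacuum_analyze_threshold + autovacuum_analyze_scale_factor * reltuples (50 + 10% of the table, by default). One way to see where each table stands, using the standard stats view:

SELECT relname, n_mod_since_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_mod_since_analyze DESC;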




With log_autovacuum_min_duration = 0, we are logging all vacuum & 
analyze operations, so we can see when the entries shut off in the 
Postgres log files, e.g.


2024-05-02 14:52:01.597 GMT [6896]: [23-1] db=,user=,app=,client= LOG:  
automatic analyze of table "lfm.pg_catalog.pg_trigger"


Except the above shows it working.

What is the evidence it is not?



The only way I can find of getting the analyzer back is to restart Postgres.


Did you wait to see if activity after the pg_restore crossed the 
autovacuum thresholds?




--
Adrian Klaver
adrian.kla...@aklaver.com





Ora2pg Delta Migration: Oracle to PostgreSQL

2024-05-02 Thread Amit Sharma
Hello,

Has anyone tried delta/incremental data migration for Oracle to PostgreSQL
using Ora2pg? Or what are the best options to run delta migration for
Oracle to PostgreSQL?

Thanks
Amit


Re: Preallocation changes in Postgresql 16

2024-05-02 Thread Riku Iki
I did the testing and confirmed that this was the issue.

I ran the following query:

 create table t as select '1234567890' from generate_series(1, 10);

I commented out the if (numblocks > 8) code block and saw the following results
from the "compsize /dbdir/" command.


Before my changes:

Processed 1381 files, 90007 regular extents (90010 refs), 15 inline.
Type   Perc Disk Usage   Uncompressed Referenced
TOTAL   97%   41G  42G  42G
none   100%   41G  41G  41G
zstd14%  157M 1.0G 1.0G
prealloc   100%   16M  16M  16M



After the changes:

Processed 1381 files, 347328 regular extents (347331 refs), 15 inline.
Type   Perc Disk Usage   Uncompressed Referenced
TOTAL3%  1.4G  42G  42G
none   100%   80K  80K  80K
zstd 3%  1.4G  42G  42G

It is clearly visible that files created with fallocate are not compressed,
so disk usage is much larger.
I am wondering whether there could be a feature request to make this cutoff
user-configurable.
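
(For context: on btrfs, extents preallocated with fallocate() are written uncompressed even when the filesystem is mounted with transparent compression, e.g. with a mount line like the following — device and path illustrative:

$ mount -o compress=zstd:3 /dev/sdb1 /dbdir

which matches the "prealloc 100%" row in the compsize output above.)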

On Fri, Apr 26, 2024 at 4:15 PM Riku Iki  wrote:

> Thank you, I have such a system. I think my task would be to compile PG
> from source (I need to learn this) and see how it works with and without
> that code block.
>
> On Thu, Apr 25, 2024 at 2:25 PM Thomas Munro 
> wrote:
>
>> On Fri, Apr 26, 2024 at 4:37 AM Riku Iki  wrote:
>> > I am wondering if there were preallocation related changes in PG16, and
>> if it is possible to disable preallocation in PostgreSQL 16?
>>
>> I have no opinion on the btrfs details, but I was wondering if someone
>> might show up with a system that doesn't like that change.  Here is a
>> magic 8, tuned on "some filesystems":
>>
>> /*
>>  * If available and useful, use posix_fallocate() (via
>>  * FileFallocate()) to extend the relation. That's often more
>>  * efficient than using write(), as it commonly won't cause the
>> kernel
>>  * to allocate page cache space for the extended pages.
>>  *
>>  * However, we don't use FileFallocate() for small extensions, as
>> it
>>  * defeats delayed allocation on some filesystems. Not clear where
>>  * that decision should be made though? For now just use a cutoff
>> of
>>  * 8, anything between 4 and 8 worked OK in some local testing.
>>  */
>> if (numblocks > 8)
>>
>> I wonder if it wants to be a GUC.
>>
>


Re: Ora2pg Delta Migration: Oracle to PostgreSQL

2024-05-02 Thread Ron Johnson
On Thu, May 2, 2024 at 8:28 PM Amit Sharma  wrote:

> Hello,
>
> Has anyone tried delta/incremental data migration for Oracle to PostgreSQL
> using Ora2pg? Or what are the best options to run delta migration for
> Oracle to PostgreSQL?
>

What do the ora2pg docs say about whether or not that feature is
implemented?  (It wasn't when I last used it in 2022.)


Re: Table data migration from single server to Flexi server

2024-05-02 Thread Justin Clift

On 2024-05-02 13:24, Bagesh kamar singh wrote:


Recently we migrated our postgreSQL single server to flexi server.


Hmmm, what's "Flexi server"?

Doing a quick online search just now isn't turning up anything that seems
to be PostgreSQL related.

Regards and best wishes,

Justin Clift