Re: running ANALYZE results in => duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"

2023-09-08 Thread Torsten Krah
On Wednesday, 2023-09-06 at 20:42 +0200, Imre Samu wrote:
> Maybe you have to increase the "--stop-timeout" value

That is totally unrelated in my case; it is an anonymous volume anyway,
which gets created on start and deleted afterwards.

Torsten





Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-09-08 Thread Matthias Apitz
On Thursday, September 07, 2023 at 12:33:06 -0400, Stephen Frost wrote:

> * Matthias Apitz (g...@unixarea.de) wrote:
> > > 
> > > There's ongoing work happening for TDE support and we'd love to hear
> > > from folks who would like to see it included. You can expect an updated
> > > patch set for the September commitfest. Getting more folks to test it
> > > and use it and review it would certainly help move it forward.
> > 
> > We have strong interest in TDE support and I would be happy to
> > test this with our Library Management System.
> 
> Great, glad to hear that.  Note that this is still very much a
> development effort and so some familiarity with how to build PostgreSQL
> from source, apply patches, and then run the resulting binaries is
> necessary to test.  If you're still interested, there's patches that
> have been posted, just let me know.

We ported our LMS from Sybase and Oracle to PostgreSQL some years ago
and have always compiled PostgreSQL from source on SuSE Linux, IIRC
starting with version 11.0. We even have our own modifications to
improve the logging of the ESQL/C layer of PostgreSQL, and to fix a
smaller bug.

I personally have 30+ years of UNIX experience with FreeBSD, SVR4,
Solaris SPARC, AIX, HP-UX and Linux.

That said, I don't see any problem applying the patches and compiling
the binaries.
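
For anyone following the same path, a generic sketch of that
build-and-test workflow (paths and patch file names are hypothetical;
the actual patch set is the one posted to the commitfest thread):

    # clone the official sources
    git clone https://git.postgresql.org/git/postgresql.git
    cd postgresql
    # apply the posted patch set (git format-patch output applies with git am)
    git am /path/to/tde-patchset/*.patch
    # build and install into a throwaway prefix
    ./configure --prefix=$HOME/pgsql-tde
    make -j"$(nproc)" && make install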

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: Huge input lookup exception when trying to create the index for XML data type column in postgreSQL

2023-09-08 Thread Dominique Devienne
On Thu, Sep 7, 2023 at 10:22 PM Tom Lane  wrote:

> Erik Wienhold  writes:
> > Looks like "Huge input lookup" as reported in [1] (also from Sai) and
> that
> > error is from libxml.
>
> Ah, thanks for the pointer.  It looks like for the DOCUMENT case,
> we could maybe relax this restriction by passing the XML_PARSE_HUGE
> option to xmlCtxtReadDoc().  However, there are things to worry about:
>

Just a remark from the sidelines, from someone who has done a fair bit
of XML in years past.

That XPath is simple, and a streaming parser (SAX or StAX) could handle
it, while that XML_PARSE_HUGE option probably applies to a DOM parser.
So is there a work-around to somehow force using a streaming parser,
instead of one that must produce the whole Document, just so a few
elements can be picked out of it? FWIW. --DD


Re: Huge input lookup exception when trying to create the index for XML data type column in postgreSQL

2023-09-08 Thread Dominique Devienne
On Fri, Sep 8, 2023 at 11:39 AM Dominique Devienne  wrote:

> On Thu, Sep 7, 2023 at 10:22 PM Tom Lane  wrote:
>
>> Erik Wienhold  writes:
>> > Looks like "Huge input lookup" as reported in [1] (also from Sai) and
>> that
>> > error is from libxml.
>>
>> Ah, thanks for the pointer.  It looks like for the DOCUMENT case,
>> we could maybe relax this restriction by passing the XML_PARSE_HUGE
>> option to xmlCtxtReadDoc().  However, there are things to worry about:
>>
>
> Just a remark from the sidelines, from someone who has done a fair bit
> of XML in years past.
>
> That XPath is simple, and a streaming parser (SAX or StAX) could handle
> it, while that XML_PARSE_HUGE option probably applies to a DOM parser.
> So is there a work-around to somehow force using a streaming parser,
> instead of one that must produce the whole Document, just so a few
> elements can be picked out of it? FWIW. --DD
>

If push comes to shove, the streaming-based extraction can be done
outside the DB, the result stored in a new column or table, and that
column indexed instead. This is in fact exactly the approach I took on
one XML-handling server I wrote.

To be honest, in my case the XMLs were never large, so I used rapidxml,
which is also a DOM parser, but the same principle applies: extract the
data from the XML outside the DB using SAX (push) or StAX (pull), to
avoid having a (too) large document in memory at any time (client or
server side). --DD
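
As an illustration of that client-side extraction, a minimal sketch in
Python (the element and attribute names follow the XPath quoted later
in this thread, directory/access/mode/@Identifier; loading the result
back into a table is left out):

    import io
    import xml.etree.ElementTree as ET

    def extract_identifiers(xml_text):
        # Yield each <mode> element's Identifier attribute, streaming,
        # so the full DOM is never held in memory.
        for _event, elem in ET.iterparse(io.StringIO(xml_text), events=("end",)):
            if elem.tag == "mode":
                ident = elem.get("Identifier")
                if ident is not None:
                    yield ident
            elem.clear()  # drop the subtree we no longer need

The extracted values can then go into an ordinary text column, which
takes a plain btree index without any XML parsing at query time.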


Re: Huge input lookup exception when trying to create the index for XML data type column in postgreSQL

2023-09-08 Thread Rob Sargent

On 9/7/23 23:51, Sai Teja wrote:

Thank you so much for all your responses.

I just tried with Hash, GIN etc., but it didn't work. I think that is
because of the XPath expression which I used in the index create command.

But is there any alternative way to change this XPath? I need to parse
the XML, as there is no other option, so I need other ways to create
the index.

Maybe if there are any parameters to change, like xmloption etc., it
would help us to resolve the issue.


Thanks,
Sai




What is a typical value returned by your xpath expression? (XPATH
('directory/access/mode/@Identifier', content))
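
For reference, the usual way to index an XPath result (the xml type
itself has no comparison operators) is an expression index on the
result cast to text[]; a minimal sketch, with table and column names
assumed from this thread:

    CREATE INDEX xml_mode_identifier_idx ON xml_tbl
        (((xpath('directory/access/mode/@Identifier', content))::text[]));

Note this would not avoid the "Huge input lookup" failure itself:
xpath() still parses each whole document while the index is built, and
that parse is exactly what libxml is rejecting.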






Even if I have a lot of free space PgSQL returns "ERROR: could not extend file - No space left on device"

2023-09-08 Thread Nature Conservation Geovista Space
Dear Pg-users,

I am coming back to Postgres/PostGIS after a few years. I am dealing with a
big database with a lot of geometries and too many vertices.

After hours running a query to Subdivide, I get this Postgres error


2023-09-08 02:11:23.745 BST [328594] postgres@database ERROR:  could not
extend file "base/16388/7985375.1020": No space left on device

2023-09-08 02:11:23.745 BST [328594] postgres@database HINT:  Check free
disk space.

2023-09-08 02:11:23.745 BST [328594] postgres@database STATEMENT:  CREATE
TABLE _gaul_administrative_subdivided100 AS (
    SELECT *, st_subdivide(geom,100) AS geom_subdivided100
    FROM gaul_administrative
);

2023-09-08 02:15:38.251 BST [313729] LOG:  checkpoint complete: wrote
81956 buffers (1.6%); 0 WAL file(s) added, 0 removed, 608 recycled;
write=269.414 s, sync=0.001 s, total=269.634 s; sync files=1,
longest=0.001 s, average=0.001 s; distance=9962549 kB, estimate=9980351
kB; lsn=291/BF46ABE8, redo lsn=291/A0FB7D98

It seems that it is not a problem of space.

Command df -h returns:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           6.3G  1.1M  6.3G   1% /run
/dev/sda        1.3T  164G  1.1T  14% /
tmpfs            32G  3.2M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           6.3G  4.0K  6.3G   1% /run/user/1000

Command df -ih returns:

Filesystem     Inodes IUsed IFree IUse% Mounted on
tmpfs            7.9M   724  7.9M    1% /run
/dev/sda          80M  179K   80M    1% /
tmpfs            7.9M     4  7.9M    1% /dev/shm
tmpfs            7.9M     3  7.9M    1% /run/lock
tmpfs            1.6M    28  1.6M    1% /run/user/1000

I suppose it is an issue with temporary tables; here is my present
configuration in postgresql.conf:

#temp_tablespaces = ''          # a list of tablespace names, '' uses
                                # only default tablespace

#temp_file_limit = -1           # limits per-process temp file space
                                # in kilobytes, or -1 for no limit

What do you suggest?

cheers,

Enzopolo
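
One detail worth noting in that log: PostgreSQL splits data files into
1 GB segments by default, so the ".1020" suffix on
"base/16388/7985375.1020" means that single relation had grown past
roughly 1 TB before the disk filled; df run afterwards looks fine
because the files of the failed, rolled-back CREATE TABLE were removed.
A sketch for watching growth from a second session while the query runs:

    -- total on-disk size of the current database, including files still
    -- being written by the in-flight CREATE TABLE ... AS
    SELECT pg_size_pretty(pg_database_size(current_database()));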


Is data import from Oracle table to Postgres table in Ora2pg consecutive or random?

2023-09-08 Thread duc hiep ha
Hello friends,

I want to migrate a large Oracle table to PostgreSQL, which is
approximately 200GB in size and includes a LOB segment. This table has a
primary key column named "ID," which increments by one unit (similar to a
sequence). During the migration of this large table, sometimes the process
terminates prematurely (due to network issues, etc.). As a result, I have
to restart the import process either from the beginning or by using a WHERE
clause in the ora2pg configuration file to import the remaining data from
where it was previously terminated.

However, I've noticed that the import process in ora2pg doesn't seem to
follow a consecutive order, starting from a small ID number and progressing
to a larger ID number. Instead, it appears to import data randomly from
various rows of the table. This makes it impossible to continue importing
data from the point where it was terminated using a WHERE clause in ora2pg.

My questions are:

- Is it correct that ora2pg imports data from the Oracle table to the
PostgreSQL table randomly and not sequentially from a smaller ID to a
larger ID?
- Or can we add additional parameters to the ora2pg.conf file to control
this process and ensure that the data is imported sequentially, following
the primary key from smallest to largest?
Thank you,
Hans
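
Two ora2pg.conf directives are relevant here (directive names as found
in the ora2pg documentation; the table name, key, and ID cutoff below
are illustrative only). Oracle returns full-scan rows in no guaranteed
order, and parallel extraction splits the table by a modulo on the
defined key, so a non-sequential order is expected rather than a bug:

    # parallel data extraction: rows arrive interleaved across connections
    ORACLE_COPIES   4
    DEFINED_PK      MYTABLE:ID

    # to resume a broken run deterministically, override the extraction
    # query for that table and force an ORDER BY on the primary key
    REPLACE_QUERY   MYTABLE[SELECT * FROM MYTABLE WHERE ID > 123456 ORDER BY ID]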


Re: ERROR: stack depth limit exceeded

2023-09-08 Thread Tom Lane
gzh   writes:
> In the Release Notes for PostgreSQL 12.14, we saw the following change:
> https://www.postgresql.org/docs/release/12.14/

>> Add recursion and looping defenses in subquery pullup (Tom Lane)
>> A contrived query can result in deep recursion and unreasonable amounts of 
>> time spent trying to flatten subqueries. A proper fix for that seems unduly 
>> invasive for a back-patch, but we can at least add stack depth checks and an 
>> interrupt check to allow the query to be cancelled.


> Our understanding is that this change will cause some complex SQL statements 
> that were previously not reporting errors to report errors in the new 
> version. 

The key word there is "contrived".  You are not going to hit this limit
without intentionally trying.  The example that led to adding this check
was a synthetic query with a huge number of UNION ALL branches:

https://www.postgresql.org/message-id/flat/703c09a2-08f3-d2ec-b33d-dbecd62428b8%40postgrespro.ru

Also notice that the query misbehaved before this patch, too, by consuming
excessive RAM.

regards, tom lane
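
For reference, the limit enforced here is max_stack_depth; in the rare
case a legitimate query hits it, the setting can be raised (superuser
only, and it must stay safely below the OS stack limit from ulimit -s):

    SHOW max_stack_depth;           -- typically 2MB by default
    SET max_stack_depth = '6MB';    -- only if the kernel stack limit allows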




Re: Even if I have a lot of free space PgSQL returns "ERROR: could not extend file - No space left on device"

2023-09-08 Thread Adrian Klaver

On 9/8/23 04:25, Nature Conservation Geovista Space wrote:

Dear Pg-users,

I am coming back to Postgres/PostGIS after a few years. I am dealing 
with a big database with a lot of geometries and too many vertices.


Postgres version?

PostGIS version?

OS and version?

Is this a physical machine or a virtual one?

Hosted or local?

'Hardware' specifications?



After hours running a query to Subdivide, I get this Postgres error


How does CREATE TABLE enter into the query?

What is the query?



2023-09-08 02:11:23.745 BST [328594] postgres@database ERROR:  could not
extend file "base/16388/7985375.1020": No space left on device

2023-09-08 02:11:23.745 BST [328594] postgres@database HINT:  Check free
disk space.

2023-09-08 02:11:23.745 BST [328594] postgres@database STATEMENT:  CREATE
TABLE _gaul_administrative_subdivided100 AS (
     SELECT *, st_subdivide(geom,100) AS geom_subdivided100
     FROM gaul_administrative
     );

2023-09-08 02:15:38.251 BST [313729] LOG:  checkpoint complete: wrote
81956 buffers (1.6%); 0 WAL file(s) added, 0 removed, 608 recycled;
write=269.414 s, sync=0.001 s, total=269.634 s; sync files=1,
longest=0.001 s, average=0.001 s; distance=9962549 kB, estimate=9980351
kB; lsn=291/BF46ABE8, redo lsn=291/A0FB7D98

It seems that it is not a problem of space.

Command df -h returns:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs           6.3G  1.1M  6.3G   1% /run
/dev/sda        1.3T  164G  1.1T  14% /
tmpfs            32G  3.2M   32G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           6.3G  4.0K  6.3G   1% /run/user/1000

Command df -ih returns:

Filesystem     Inodes IUsed IFree IUse% Mounted on
tmpfs            7.9M   724  7.9M    1% /run
/dev/sda          80M  179K   80M    1% /
tmpfs            7.9M     4  7.9M    1% /dev/shm
tmpfs            7.9M     3  7.9M    1% /run/lock
tmpfs            1.6M    28  1.6M    1% /run/user/1000

I suppose it is an issue with temporary tables; here is my present
configuration in postgresql.conf:

#temp_tablespaces = ''          # a list of tablespace names, '' uses
                                # only default tablespace

#temp_file_limit = -1           # limits per-process temp file space
                                # in kilobytes, or -1 for no limit

What do you suggest?

cheers,

Enzopolo



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pg_restore unexpected end of file

2023-09-08 Thread Adrian Klaver

On 9/7/23 23:29, Les wrote:

I'm trying to migrate a database from version 11 to version 15.

I have created a dump file on v11:

pg_dump --dbname=not_telling -Fc --no-owner > dump


--no-owner is ignored for non-text outputs. It is handled on the 
pg_restore end.




Then I was trying to import it on v15:

pg_restore -v --dbname=not_telling --no-user /restore/dump


I'm guessing it was actually --no-owner not --no-user?


pg_restore: error: could not read from input file: end of file


Are:

> dump

and

/restore/dump

pointing at the same thing?

Was the dump file transferred from one location to another?

If so how?
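
A quick first check on the receiving side (plain shell; the path is the
one from your command): pg_restore --list reads just the archive's table
of contents, so a file damaged or truncated in transfer will often fail
right away, before any real restore is attempted:

    pg_restore -l /restore/dump | head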


Is the custom dump format of v11 compatible with v15?


Yes it is.

https://www.postgresql.org/docs/current/app-pgdump.html

"Because pg_dump is used to transfer data to newer versions of 
PostgreSQL, the output of pg_dump can be expected to load into 
PostgreSQL server versions newer than pg_dump's version. pg_dump can 
also dump from PostgreSQL servers older than its own version. 
(Currently, servers back to version 9.2 are supported.) However, pg_dump 
cannot dump from PostgreSQL servers newer than its own major version; it 
will refuse to even try, rather than risk making an invalid dump. Also, 
it is not guaranteed that pg_dump's output can be loaded into a server 
of an older major version — not even if the dump was taken from a server 
of that version. Loading a dump file into an older server may require 
manual editing of the dump file to remove syntax not understood by the 
older server. Use of the --quote-all-identifiers option is recommended 
in cross-version cases, as it can prevent problems arising from varying 
reserved-word lists in different PostgreSQL versions."




Thank you,

    Laszlo



--
Adrian Klaver
adrian.kla...@aklaver.com





pgsql --echo-errors --quiet and setval

2023-09-08 Thread Les
  Dear fellow PostgreSQL users,

Today I caught this with postgresql v15, while restoring a database with
psql  --echo-errors --quiet. (The dump was made using pg_dump -Fp).

It logged lots of these messages:

 setval
--------
   1001
(1 row)

In other words, it logs **some** things that are not errors, even though
--quiet was specified. Is this the expected behaviour? I would argue that
with --quiet --echo-errors, only errors should be logged, and setting the
value of a sequence is not an error.

Thank you,

Laszlo


Re: pgsql --echo-errors --quiet and setval

2023-09-08 Thread Adrian Klaver

On 9/8/23 12:36, Les wrote:


   Dear fellow PostgreSQL users,

Today I caught this with postgresql v15, while restoring a database with 
psql  --echo-errors --quiet. (The dump was made using pg_dump -Fp).


It logged lots of these messages:

  setval
--------
    1001
(1 row)

In other words, it logs **some** things that are not errors, even though 
--quiet was specified. Is this the expected behaviour? I would argue 
that with --quiet --echo-errors, only errors should be logged, and 
setting the value of a sequence is not an error.


No, but SELECT pg_catalog.setval(...) in the dump file is a function
call that has a return value, and that is what you are seeing.




Thank you,

     Laszlo



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: pgsql --echo-errors --quiet and setval

2023-09-08 Thread David G. Johnston
On Fri, Sep 8, 2023 at 1:41 PM Les  wrote:

>
>   Dear fellow PostgreSQL users,
>
> Today I caught this with postgresql v15, while restoring a database with
> psql  --echo-errors --quiet. (The dump was made using pg_dump -Fp).
>
> It logged lots of these messages:
>
>  setval
> --------
>    1001
> (1 row)
>
> In other words, it logs **some** things that are not errors, even though
> --quiet was specified. Is this the expected behaviour? I would argue that
> with --quiet --echo-errors, only errors should be logged, and setting the
> value of a sequence is not an error.
>

The output of SELECT queries cannot be quieted, only redirected.

David J.
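
A sketch of that redirection for a plain-format dump (the file name is
illustrative; psql's -o/--output flag diverts query results while errors
still go to stderr):

    psql --quiet --echo-errors --output=/dev/null \
         --dbname=not_telling -f dump.sql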