Re: psql \copy hanging

2019-10-07 Thread Arnaud L.

On 04/10/2019 at 19:08, Adrian Klaver wrote:

On 10/4/19 12:19 AM, Arnaud L. wrote:
OK I can do that. I thought I nailed it down to this line because it 
started failing when this line was ~5th in the script, and it kept 
failing on that very same line after I moved it to the very end of the 
script (that's where it is now).


Which tends to point to it as the problem. The question is whether it
exhibits that behavior on its own or only when in combination with the
other commands.


Yes. It ran fine last night. I had moved the line back to its 
original place, so now everything is exactly as it was before it 
started showing this behaviour.


So, still apparently random...


As a side note, COPY (...) TO STDOUT \g 'somefile' does not work in a 
script file (i.e. it does not work if the command is passed in a file 
via the -f argument).
The command runs fine, no error is raised by either the client or the 
server, but no file is written.


Yeah not sure how that is supposed to work:


[...]

production_(postgres)# \copy (select * from cell_per) TO STDOUT \g
'cell.txt'
ERROR:  syntax error at or near "\"
LINE 1: COPY  ( select * from cell_per ) TO STDOUT \g 'cell.txt'



This works with real SQL commands, so it should be "COPY" here, not "\copy".
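
That is, run interactively, this form does write the file (same query as 
the example above):

production_(postgres)# COPY (select * from cell_per) TO STDOUT \g 'cell.txt'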

Regards
--
Arnaud




RowDescription message

2019-10-07 Thread Tatsuo Ishii
According to the manual, for the RowDescription message
https://www.postgresql.org/docs/12/protocol-message-formats.html

  Specifies the number of fields in a row (can be zero).

Could zero fields actually happen?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: RowDescription message

2019-10-07 Thread Andres Freund
Hi,

On 2019-10-07 17:00:24 +0900, Tatsuo Ishii wrote:
> According to the manual, for the RowDescription message
> https://www.postgresql.org/docs/12/protocol-message-formats.html
> 
> Specifies the number of fields in a row (can be zero).
> 
> Could zero fields actually happen?

Yes, e.g.:

SELECT;
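
which in psql produces a result with one row and zero columns:

=# SELECT;
--
(1 row)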




Re: COPY command returns "ERROR: invalid XML content"

2019-10-07 Thread Tomas Vondra

On Sun, Oct 06, 2019 at 08:45:40PM -0700, Konstantin Izmailov wrote:

Hi,
I'm using libpq (v10) to import lots of XML files into a PG10 table. I
noticed that if the number of records imported exceeds 2100, the following
error is returned:
ERROR:  invalid XML content
DETAIL:  line 1: Couldn't find end of Start Tag
timeBasedFileNamingAndTriggerin line 1
logFile.%d{-MM-dd}.%i.html

My guess is this is an issue/limitation in libxml2, which we use to
parse and process XML. What libxml2 version do you have installed? Can you
share an example XML document to reproduce the issue?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Install postgres on rhel 7

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 01:00:32AM -0400, Mageshwaran Janarthanam wrote:

Hi Team,

I have a question on the postgres install process. I tried to read the
archives but couldn't find an answer.

I want to install the postgres server via rpm and store the
binaries under a non-default location like /pgbin.



The rpm packages (at least those available at yum.postgresql.org) are
not relocatable:

 rpm -qpi postgresql12-12.0-1PGDG.rhel7.x86_64.rpm  | grep Relocations
 Relocations : (not relocatable)

So unfortunately, you'll have to build your own packages, or something
like that.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Pg11 -- MultiXactId xxxx has not been created yet -- apparent wraparound

2019-10-07 Thread Moreno Andreo

On 04/10/19 at 21:14, Alvaro Herrera wrote:

On 2019-Oct-04, Moreno Andreo wrote:


On 04/10/19 at 18:28, Alvaro Herrera wrote:

I wonder if it would work to just clear that multixact with
SELECT ... WHERE ctid='(3160,31)' FOR UPDATE

Select ... what? :-) Sorry, but it's totally beyond my knowledge and my
control. After resolving the issue I'll surely go and search the docs to
understand what we've done.

This should do it:

SELECT * FROM the_broken_table WHERE ctid = '(3160,31)' FOR UPDATE;

But of course I make no promise of it working or even having any effect
at all ...

Unfortunately, it didn't work :(

db0=# select * from failing_table where ctid='(3160,31)' for update;
ERROR:  MultiXactId 12800 has not been created yet -- apparent wraparound

Since the probability that we are into corruption is very high, what if I 
\copy all of the table except the failing row(s) to an external file, drop 
and recreate the table, and then \copy the clean data back inside?
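
Something like this, as an untested sketch (note that a sequential scan may 
still trip over the damaged tuple while checking visibility, in which case 
copying ctid ranges around the bad page would be needed instead):

\copy (SELECT * FROM failing_table WHERE ctid <> '(3160,31)') TO 'clean_rows.csv' CSV
-- drop and recreate failing_table with its original definition, then:
\copy failing_table FROM 'clean_rows.csv' CSV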


Thanks
Moreno.-





Re: RowDescription message

2019-10-07 Thread Tatsuo Ishii
>> According to the manual, for the RowDescription message
>> https://www.postgresql.org/docs/12/protocol-message-formats.html
>> 
>> Specifies the number of fields in a row (can be zero).
>> 
>> Could zero fields actually happen?
> 
> Yes, e.g.:
> 
> SELECT;

Thanks. Is it a valid SQL statement according to the standard?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: RowDescription message

2019-10-07 Thread Tom Lane
Tatsuo Ishii  writes:
>>> According to the manual, for the RowDescription message
>>> https://www.postgresql.org/docs/12/protocol-message-formats.html
>>> Specifies the number of fields in a row (can be zero).
>>> Could zero fields actually happen?

>> Yes, e.g.:
>> SELECT;

> Thanks. Is it a valid SQL statement according to the standard?

See select.sgml:

  
Empty SELECT Lists

   The list of output expressions after SELECT can be
   empty, producing a zero-column result table.
   This is not valid syntax according to the SQL standard.
   PostgreSQL allows it to be consistent with
   allowing zero-column tables.
   However, an empty list is not allowed when DISTINCT is used.


regards, tom lane




Re: psql \copy hanging

2019-10-07 Thread Arnaud L.

On 07/10/2019 at 16:36, Adrian Klaver wrote:

So you are saying that you have not run the problematic line by itself?


It depends what you mean by that.
I've run this line by itself many times; in fact, every time the script 
has failed.
But until today I had not split the batch script to call two separate 
SQL scripts, with one containing only the problematic line, no.


I've changed it this morning, so we'll see how it goes now.

Regards
--
Arnaud




Re: psql \copy hanging

2019-10-07 Thread Adrian Klaver

On 10/7/19 12:41 AM, Arnaud L. wrote:

On 04/10/2019 at 19:08, Adrian Klaver wrote:

On 10/4/19 12:19 AM, Arnaud L. wrote:
OK I can do that. I thought I nailed it down to this line because it 
started failing when this line was ~5th in the script, and it kept 
failing on that very same line after I moved it to the very end of 
the script (that's where it is now).


Which tends to point to it as the problem. The question is whether it
exhibits that behavior on its own or only when in combination with the
other commands.


Yes. It ran fine last night. I had moved the line back to its 
original place, so now everything is exactly as it was before it 
started showing this behaviour.


So you are saying that you have not run the problematic line by itself?



So, still apparently random...

Yeah not sure how that is supposed to work:


[...]

production_(postgres)# \copy (select * from cell_per) TO STDOUT \g
'cell.txt'
ERROR:  syntax error at or near "\"
LINE 1: COPY  ( select * from cell_per ) TO STDOUT \g 'cell.txt'




This works with real SQL commands, so it should be "COPY" here, not 
"\copy".


I was not paying attention, thanks for the heads up.



Regards
--
Arnaud




--
Adrian Klaver
adrian.kla...@aklaver.com




PG 12 not yet for mac

2019-10-07 Thread Ravi Krishna
https://postgresapp.com/downloads.html

The link which says PG 12 is actually PG 11.




Re: PG 12 not yet for mac

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 11:11:43AM -0400, Ravi Krishna wrote:

https://postgresapp.com/downloads.html

The link which says PG 12 is actually PG 11.



Not sure if the link is correct or not (it seems to point to
Postgres-2.3-12.dmg, which seems like it might be v12). But more
importantly, that's not a project/page managed by the PostgreSQL
community, so you need to report issues to the authors (most likely
through github issues).

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: RowDescription message

2019-10-07 Thread Wim Bertels
Or
CREATE TABLE t();

SELECT *
FROM t;

From: Tatsuo Ishii 
Sent: Monday, 7 October 2019 10:00
To: pgsql-gene...@postgresql.org 
Subject: RowDescription message

According to the manual, for the RowDescription message
https://www.postgresql.org/docs/12/protocol-message-formats.html

  Specifies the number of fields in a row (can be zero).

Could zero fields actually happen?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




pgcrypto question

2019-10-07 Thread Erik Aronesty
Currently, it is my understanding that the pgcrypto library requires
the user to send a password or private key up to the server for
decryption.
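
For example, with pgcrypto the passphrase travels inside the query itself
(table and column names here are just illustrative):

SELECT pgp_sym_decrypt(secret_col, 'my-passphrase') FROM my_table;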

Is there a notion of a client-side encrypt/decrypt plugin when doing a
postgres query?

For example, a user could query postgres, get back data of type
"encrypted", and a "libpq" plugin could decode/decrypt those columns
that are of data type "encrypted" in a manner transparent to the
user of the client.

Obviously I could write this by intercepting the relevant libpq calls
using LD_PRELOAD or Microsoft's "Detours" ... but is there a better
way to do that?

- Erik




Re: PMChildFlags array

2019-10-07 Thread Tom Lane
bhargav kamineni  writes:
>> So ... how many is "a bunch"?
> more than 85

Hm.  That doesn't seem like it'd be enough to trigger the problem;
you'd need about max_connections excess connections (that are shortly
going to be rejected) to run into this problem, and you said you
had max_connections = 500.  Maybe several different clients were all
doing this at once?

But anyway, AFAICS there is only one code path that could lead to the
reported error message, so one way or another you got there.  I've
pushed a fix for this, which will be in next month's releases.

> below errors observed after crash in postgres logfile :

> ERROR:  xlog flush request  is not satisfied -- for a couple of tables; we
> initiated vacuum full on those tables and the error went away after that.
> ERROR:  right sibling's left-link doesn't match: block 273660 links to
> 273500 instead of expected 273661 in index -- observed this error while
> doing vacuum freeze on the database; we dropped this index and created a
> new one.

That seems unrelated.  A postmaster crash shouldn't have any
data-corruption consequences, since it never touches any
relation files directly.

regards, tom lane




Re: pgcrypto question

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 12:05:16PM -0400, Erik Aronesty wrote:

Currently, it is my understanding that the pgcrypto library requires
the user to send a password or private key up to the server for
decryption.



Correct. In the naive case the key is included in each SQL query, which
does have various issues. Bruce Momjian has a nice extension that allows
you to fix that by loading the key into backend memory:

 http://momjian.us/download/pgcryptokey/


Is there a notion of a client-side encrypt/decrypt plugin when doing a
postgres query?

For example, a user could query postgres, get back data of type
"encrypted", and a "libpq" plugin could decode/decrypt those columns
that are of data type "encrypted" in a manner transparent to the
user of the client

Obviously I could write this by intercepting the relevant libpq calls
using LD_PRELOAD or Microsoft's "Detours" ... but is there a better
way to do that?



AFAIK that's usually done at the application level, i.e. the application
is sending/receiving encrypted data, and the database simply sees bytea
columns. I'm not aware of a driver doing that transparently, but it
seems like an interesting idea - I wonder if it could be done e.g. in
psycopg as an extension, or something like that.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Declarative Range Partitioning Postgres 11

2019-10-07 Thread Shatamjeev Dewan
Hi,

I am trying to create a table in postgres 11 with a timestamp column as the 
partition key, using PARTITION BY RANGE (create_dtt). The table definition 
also has an id column, which is the primary key. This fails with:

ERROR:  insufficient columns in PRIMARY KEY constraint definition
DETAIL:  PRIMARY KEY constraint on table "audit_p" lacks column "create_dtt" 
which is part of the partition key.

I don't want the partition key column (create_dtt) to be part of a composite 
primary key. Is there any way I can create a range partition on the date 
column without including it as part of the primary key?
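
A minimal reproduction (column types are my assumption):

CREATE TABLE audit_p (
    id          BIGINT PRIMARY KEY,
    create_dtt  TIMESTAMP NOT NULL
) PARTITION BY RANGE (create_dtt);
-- fails with the ERROR above: on a partitioned table, the primary key
-- must include all partition key columns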

Please advise.

Thanks,
Shatamjeev



Re: pgcrypto question

2019-10-07 Thread Erik Aronesty
Good idea for "psycopg".  It would be easy for a POC, but I think the
only meaningful layer to operate at would be a libpq drop-in
replacement that intercepts PQgetvalue, PQprepare, PQexecParams,
PQexecPrepared ... etc.   That way odbc, python, node, etc would "just
work" as long as you used LD_PRELOAD appropriately.

I never like building things that way though... it would be super cool
if libpq optionally supported client-side plugins too, but maybe some
day if there are more use cases.

On Mon, Oct 7, 2019 at 2:08 PM Tomas Vondra
 wrote:
>
> On Mon, Oct 07, 2019 at 12:05:16PM -0400, Erik Aronesty wrote:
> >Currently, it is my understanding that the pgcrypto library requires
> >the user to send a password or private key up to the server for
> >decryption.
> >
>
> Correct. In the naive case the key is included in each SQL query, which
> does have various issues. Bruce Momjian has a nice extension that allows
> you to fix that by loading the key into backend memory:
>
>   http://momjian.us/download/pgcryptokey/
>
> >Is there a notion of a client-side encrypt/decrypt plugin when doing a
> >postgres query?
> >
> >For example, a user could query postgres, get back data of type
> >"encrypted", and a "libpq" plugin could decode/decrypt those columns
> >that are of data type "encrypted" in a manner transparent to the
> >user of the client
> >
> >Obviously I could write this by intercepting the relevant libpq calls
> >using LD_PRELOAD or Microsoft's "Detours" ... but is there a better
> >way to do that?
> >
>
> AFAIK that's usually done at the application level, i.e. the application
> is sending/receiving encrypted data, and the database simply sees bytea
> columns. I'm not aware of a driver doing that transparently, but it
> seems like an interesting idea - I wonder if it could be done e.g. in
> psycopg as an extension, or something like that.
>
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: pgcrypto question

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 02:51:30PM -0400, Erik Aronesty wrote:

Good idea for "psycopg".  It would be easy for a POC, but I think the
only meaningful layer to operate at would be a libpq drop-in
replacement that intercepts PQgetvalue, PQprepare, PQexecParams,
PQexecPrepared ... etc.   That way odbc, python, node, etc would "just
work" as long as you used LD_PRELOAD appropriately.



It's not clear to me how that would know which columns are encrypted,
with what key, etc. Because those encrypted columns are essentially just
regular bytea columns, there's no easy way to distinguish them.

I'm no psycopg2 expert, but it does have some infrastructure for casting
PostgreSQL types to Python types, and I guess that could be used for the
encryption.

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Event Triggers and Dropping Objects

2019-10-07 Thread Bruce Momjian
On Sat, Oct  5, 2019 at 10:50:14AM +0200, Luca Ferrari wrote:
> On Fri, Oct 4, 2019 at 10:38 PM Miles Elam  wrote:
> >
> > The event trigger firing matrix lists tags like DROP TABLE and DROP 
> > FUNCTION below the ddl_command_end event, but when I created a 
> > basic audit table and event trigger, they don't seem to fire. I know 
> > sql_drop exists, but again the matrix lists DROP commands under the 
> > ddl_command_end event.
> 
> Yes, I think this is a little misleading:
> .
> The ddl_command_end event is issued, and the function is invoked, but
> pg_event_trigger_ddl_commands() returns NULL on such an invocation,
> because sql_drop is the event with the attached data.
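
For reference, a minimal sketch of the sql_drop side, where the dropped-object
data actually is available via pg_event_trigger_dropped_objects():

CREATE FUNCTION log_drops() RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    -- this set-returning function only yields rows inside sql_drop triggers
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects() LOOP
        RAISE NOTICE 'dropped: % %', obj.object_type, obj.object_identity;
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER log_drops_trg ON sql_drop EXECUTE PROCEDURE log_drops();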

Do the Postgres docs need improvement here?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: pgcrypto question

2019-10-07 Thread Erik Aronesty
Actually, I found a nice open source product, "Acra" ... it seems to do
the whole thing via a proxy. Now I need to see if I can customize the
encryption enough using a plugin (but at least I can fork it and start
from there). A proxy encryption system seems to be the right call;
then all my client apps can stay the same.


On Mon, Oct 7, 2019 at 3:49 PM Tomas Vondra
 wrote:
>
> On Mon, Oct 07, 2019 at 02:51:30PM -0400, Erik Aronesty wrote:
> >Good idea for "psycopg".  It would be easy for a POC, but I think the
> >only meaningful layer to operate at would be a libpq drop-in
> >replacement that intercepts PQgetvalue, PQprepare, PQexecParams,
> >PQexecPrepared ... etc.   That way odbc, python, node, etc would "just
> >work" as long as you used LD_PRELOAD appropriately.
> >
>
> It's not clear to me how that would know which columns are encrypted,
> with what key, etc. Because those encrypted columns are essentially just
> regular bytea columns, there's no easy way to distinguish them.
>
> I'm no psycopg2 expert, but it does have some infrastructure for casting
> PostgreSQL types to Python types, and I guess that could be used for the
> encryption.
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




temporary files

2019-10-07 Thread dangal
Hello everyone, I have a question; maybe you can help me. I have set
work_mem to 100 MB, but I see the following in pgbadger:

Queries generating the most temporary files (N)

Count  Total size  Min size   Max size   Avg size
   58  3.24 GiB    57.15 MiB  57.15 MiB  57.15 MiB

How can it be that a query requiring 57 MB spills to disk when 100 MB
is available?
thanks a lot



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




Table locking during backup

2019-10-07 Thread Artur Zając
Hi,

I cannot reproduce a scenario I found in my PostgreSQL logs.

I have two connections/processes:

Process 24755 is a standard pg_dump connection with:

...
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY
...
LOCK TABLE gm.tableabc;
...
COPY FROM gm.tableabc

Process 5969 is a normal connection where I have:

LOG: process 5969 still waiting for AccessExclusiveLock on relation
562888531 of database 16429 after 1000.066 ms

DETAIL: Process holding the lock: 24755. Wait queue: 5969

The query that is waiting (from process 5969) is:

CREATE TEMP TABLE IF NOT EXISTS tableabc
(
Id BIGINT DEFAULT gm.top()
) INHERITS (gm.tableabc);

I cannot reproduce what pg_dump is doing that causes the wait in process 5969
(which lock is acquired and why). When pg_dump ends, the lock is released and
process 5969 continues.

I know that doing this in two parallel connections:

BEGIN;

CREATE TEMP TABLE IF NOT EXISTS tableabc
(
Id BIGINT DEFAULT gm.top()
) INHERITS (gm.tableabc);

causes a waiting state on the second connection until the first finishes, but
the pg_dump connection is read only.

Could you suggest which part of pg_dump (which query) might cause that
behaviour?

Artur Zając



Re: temporary files

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 01:52:41PM -0700, dangal wrote:

Hello everyone, I have a question; maybe you can help me. I have set
work_mem to 100 MB, but I see the following in pgbadger:

Queries generating the most temporary files (N)

Count  Total size  Min size   Max size   Avg size
   58  3.24 GiB    57.15 MiB  57.15 MiB  57.15 MiB

How can it be that a query requiring 57 MB spills to disk when 100 MB
is available?
thanks a lot



The on-disk and in-memory representations are not the same, and the
on-disk one is often significantly more efficient. Data that needs a 57MB
temporary file may need ~150MB of memory to perform an in-memory sort, for
example.

Yet another reason may be that e.g. a hash join splits the memory into
batches, and each one has to fit into work_mem. And whenever we hit the
limit, we double the number of batches, i.e. we cut the batch size in
half. Thus the size is somewhere between 50 and 100MB, with ~75MB on
average. Furthermore, we also estimate the batch size before running the
query, so we may start with unnecessarily many batches, pushing the
average batch size down even more.


FWIW it's somewhat suspicious that you have 58 temp files with almost no
variability in size. That seems as if a single query was executed
repeatedly. Maybe try looking into which query that is.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Table locking during backup

2019-10-07 Thread Tomas Vondra

On Mon, Oct 07, 2019 at 11:00:08PM +0200, Artur Zając wrote:

Hi,

I cannot reproduce a scenario I found in my PostgreSQL logs.

I have two connections/processes:

Process 24755 is a standard pg_dump connection with:

...
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY
...
LOCK TABLE gm.tableabc;
...
COPY FROM gm.tableabc

Process 5969 is a normal connection where I have:

LOG: process 5969 still waiting for AccessExclusiveLock on relation
562888531 of database 16429 after 1000.066 ms

DETAIL: Process holding the lock: 24755. Wait queue: 5969

The query that is waiting (from process 5969) is:

CREATE TEMP TABLE IF NOT EXISTS tableabc
(
Id BIGINT DEFAULT gm.top()
) INHERITS (gm.tableabc);

I cannot reproduce what pg_dump is doing that causes the wait in process 5969
(which lock is acquired and why). When pg_dump ends, the lock is released and
process 5969 continues.

I know that doing this in two parallel connections:

BEGIN;

CREATE TEMP TABLE IF NOT EXISTS tableabc
(
Id BIGINT DEFAULT gm.top()
) INHERITS (gm.tableabc);

causes a waiting state on the second connection until the first finishes, but
the pg_dump connection is read only.



Not sure why it would matter that the pg_dump connection is read-only;
this is about locking, because pg_dump needs to ensure the schema does
not change while it's running.

pg_dump does not do

 LOCK TABLE gm.tableabc;

but

 LOCK TABLE gm.tableabc IN ACCESS SHARE MODE;

which should be visible in the pg_locks system view. And it does conflict
with the ACCESS EXCLUSIVE mode used by the second query.
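
For example, while the dump is running:

 SELECT pid, mode, granted
   FROM pg_locks
  WHERE relation = 'gm.tableabc'::regclass;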




Could you suggest which part of pg_dump (which query) might cause that
behaviour?



It's this line:

https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L6676


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Performance on JSONB select

2019-10-07 Thread Michael Lewis
>
> My JSON always has a status
>

Why declare the "where" clause when creating the index? It would not seem
needed if status is always set, since your index will then reference all
rows in the table anyway.

Thanks for sharing the trick of having the second column in the index
determine the key based on the first column. I don't know if/when I might
need this, but it's an interesting solution. Generically, it seems like the
below:

create index idx_mytable_jsonb_dependent_fields on public.mytable (
  (JsonBField->>'primary_field_to_filter_on'),
  (JsonBField->(JsonBField->>'primary_field_to_filter_on')->>'secondary_field')
);
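
A query shaped like the following could then use both index columns (the
values are illustrative):

SELECT *
FROM public.mytable
WHERE (JsonBField->>'primary_field_to_filter_on') = 'status_a'
  AND (JsonBField->(JsonBField->>'primary_field_to_filter_on')->>'secondary_field') = 'some_value';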


Re: JSONB maximal length ?

2019-10-07 Thread Michael Lewis
> Hi
>
> By inserting data in a JSONB type column I got the following error
> message:
>
>   ERROR:  string too long to represent as jsonb string
>   DETAIL:  Due to an implementation restriction, jsonb strings cannot
>   exceed 268435455 bytes.
>
> Could anyone confirm that there is a size limit for JSONB type fields?
>>
Have you looked at the bytea datatype? I believe this would allow up to 1GB,
which is the max field size, and so the maximum (minus some overhead) for a
row. Perhaps, though, storing files of unusually large size should be done
outside of the database.


Re: Declarative Range Partitioning Postgres 11

2019-10-07 Thread Michael Lewis
No, what you want is not possible, and I would expect it probably never will
be. Scanning every partition to validate the primary key isn't scalable.


Re: Declarative Range Partitioning Postgres 11

2019-10-07 Thread Ron

On 10/7/19 6:17 PM, Michael Lewis wrote:

No, what you want is not possible, and I would expect it probably never will be.


Sure it is.  Maybe not the (weird) way that Postgres does partitioning, but 
the legacy RDBMS that I still occasionally maintain has had, for at least 25 
years, a partition key independent of any indexes.



Scanning every partition to validate the primary key isn't scalable.


That's only because of the way Pg implements partitioning.

--
Angular momentum makes the world go 'round.




Re: PG 12 not yet for mac

2019-10-07 Thread Pankaj Jangid
Tomas Vondra  writes:

> On Mon, Oct 07, 2019 at 11:11:43AM -0400, Ravi Krishna wrote:
>>https://postgresapp.com/downloads.html
>>
>>The link which says PG 12 is actually PG 11.
>>
>
> Not sure if the link is correct or not (it seems to point to
> Postgres-2.3-12.dmg, which seems like it might be v12). But more
> importantly, that's not a project/page managed by the PostgreSQL
> community, so you need to report issues to the authors (most likely
> through github issues).
>
I am waiting for the Homebrew cask to update. I'm eager to test
the new features.

--
Regards,
Pankaj




Re: COPY command returns "ERROR: invalid XML content"

2019-10-07 Thread Konstantin Izmailov
Tomas, thank you for your reply! I cannot upload 2100+ XML files. Some of
them are huge.

I'm not sure if libpq is using libxml2 on Windows. In the debugger I see very
strange behavior in pqsecure_write. It seems like it stops sending data
from the provided buffer after 9,100,000 bytes.

I hoped that someone had come across a similar issue and could bring some
insight. I'll continue researching the issue.

On Mon, Oct 7, 2019 at 5:13 AM Tomas Vondra 
wrote:

> On Sun, Oct 06, 2019 at 08:45:40PM -0700, Konstantin Izmailov wrote:
> >Hi,
> >I'm using libpq (v10) to import lots of XML files into a PG10 table. I
> >noticed that if the number of records imported exceeds 2100, the following
> error
> >is returned:
> >ERROR:  invalid XML content
> >DETAIL:  line 1: Couldn't find end of Start Tag
> >timeBasedFileNamingAndTriggerin line 1
>
> >logFile.%d{-MM-dd}.%i.html >
>
> My guess is this is an issue/limitation in libxml2, which we use to
> parse and process XML. What libxml2 version do you have installed? Can you
> share an example XML document to reproduce the issue?
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: COPY command returns "ERROR: invalid XML content"

2019-10-07 Thread Konstantin Izmailov
Please ignore this thread. After several days of debugging I found a bug in
my application. It was a misalignment of data when internal buffers were
reallocated. After the application fix it all works as expected. Sorry for
the false alarm.

On Mon, Oct 7, 2019 at 7:03 PM Konstantin Izmailov  wrote:

> Tomas, thank you for your reply! I cannot upload 2100+ xml files. Some of
> them are huge.
>
> I'm not sure if libpq is using libxml2 on Windows. In the debugger I see very
> strange behavior in pqsecure_write. It seems like it stops sending data
> from the provided buffer after 9,100,000 bytes.
>
> I hoped that someone had come across a similar issue and could bring some
> insight. I'll continue researching the issue.
>
> On Mon, Oct 7, 2019 at 5:13 AM Tomas Vondra 
> wrote:
>
>> On Sun, Oct 06, 2019 at 08:45:40PM -0700, Konstantin Izmailov wrote:
>> >Hi,
>> >I'm using libpq (v10) to import lots of XML files into a PG10 table. I
>> >noticed that if the number of records imported exceeds 2100, the following
>> error
>> >is returned:
>> >ERROR:  invalid XML content
>> >DETAIL:  line 1: Couldn't find end of Start Tag
>> >timeBasedFileNamingAndTriggerin line 1
>>
>> >logFile.%d{-MM-dd}.%i.html> >
>>
>> My guess is this is an issue/limitation in libxml2, which we use to
>> parse and process XML. What libxml2 version do you have installed? Can you
>> share an example XML document to reproduce the issue?
>>
>> regards
>>
>> --
>> Tomas Vondra  http://www.2ndQuadrant.com
>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>>
>


RE: Table locking during backup

2019-10-07 Thread Artur Zając


> Not sure why it would matter that the pg_dump connection is read-only; this
> is about locking, because
> pg_dump needs to ensure the schema does not change while it's running.
> pg_dump does not do
>
>   LOCK TABLE gm.tableabc;
>
> but
>
>   LOCK TABLE gm.tableabc IN ACCESS SHARE MODE;
>
> which should be visible in the pg_locks system view. And it does conflict
> with the ACCESS EXCLUSIVE mode used by the second query.

My mistake. I cited the log incorrectly - of course it should be (and there 
was) LOCK TABLE gm.tableabc IN ACCESS SHARE MODE.
Still, your answer does not explain everything. I tried to reproduce it with 
the minimum commands:

First session:

BEGIN; 
set transaction isolation level repeatable read, read only;
lock TABLE gm.tableabc IN access share mode;


Second session:
BEGIN;
CREATE TEMP TABLE IF NOT EXISTS tableabc (Id BIGINT DEFAULT random()) INHERITS 
(gm.tableabc);

"CREATE TEMP TABLE" does not wait for anything. Waiting state is only when I 
start third session with the same queries as in second.

Maybe "LOCK TABLE ... IN ACCESS SHARE MODE" is the answer, but with correlation 
with some other query or setting setted by pg_backup.

Table gm.tableabc is created with "create table gm.tableabc (x int default 
random());"

AZ