Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Laurenz Albe
On Tue, 2020-03-24 at 17:50 -0500, Chris Morris wrote:
> Is it even possible to use PG partitioning across a Foreign Server?

I am not certain what you mean, but you can have foreign tables as partitions
of a partitioned table.  The partitions won't be processed in parallel though.
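A minimal sketch of that setup (table, server, and column names here are assumptions for illustration, not from the thread; it presumes postgres_fdw is installed and a foreign server "archive_srv" is already defined):

```sql
-- Parent partitioned table
CREATE TABLE measurements (
    ts      timestamptz NOT NULL,
    payload text
) PARTITION BY RANGE (ts);

-- A regular local partition
CREATE TABLE measurements_2020 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- A foreign table on another server serving as a partition
CREATE FOREIGN TABLE measurements_2019 PARTITION OF measurements
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01')
    SERVER archive_srv
    OPTIONS (schema_name 'public', table_name 'measurements_2019');
```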

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: PostgreSQL10.x client Vs. PostgreSQL 11.x server

2020-03-25 Thread Jimmy Angelakos
Hi Deepti,

As Peter pointed out (and I should have clarified), the server-side
features of each version should work regardless of client.
So the pg11 client will support all of the features of pg10 server (should
be fully compatible).

Best regards,
Jimmy

Jimmy Angelakos
Senior PostgreSQL Architect
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/


RE: PostgreSQL10.x client Vs. PostgreSQL 11.x server

2020-03-25 Thread Deepti Sharma S
Hi Jimmy,

Let me rephrase my question: if the customer has a PostgreSQL 11.x client and a
PostgreSQL 10.x server, will that be compatible?


[Ericsson]
DEEPTI SHARMA
Specialist
ITIL 2011 Foundation Certified
BDGS, R&D

Ericsson
3rd Floor, ASF Insignia - Block B Kings Canyon,
Gwal Pahari, Gurgaon, Haryana 122 003, India
Phone 0124-6243000
deepti.s.sha...@ericsson.com
www.ericsson.com

From: Jimmy Angelakos 
Sent: Friday, March 13, 2020 2:17 PM
To: Deepti Sharma S 
Subject: Re: PostgreSQL10.x client Vs. PostgreSQL 11.x server

Hi Deepti,
By using an older client, you will be missing out on the additional features 
that Postgres 11 (and its corresponding client) supports. By referring to the 
release notes, you can identify those:
https://www.postgresql.org/docs/release/11.0/
You can generally use an older client with a new release, but for the above 
reasons you should try to match server and client versions.

Best regards
Jimmy

Jimmy Angelakos
Senior PostgreSQL Architect
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/


On Fri, 13 Mar 2020 at 08:19, Deepti Sharma S
<deepti.s.sha...@ericsson.com> wrote:
Hello Team,

Can anyone help us answer the query below?



From: Deepti Sharma S
Sent: Monday, March 2, 2020 3:20 PM
To: 'postgres-disc...@mailman.lmera.ericsson.se'
Subject: PostgreSQL10.x client Vs. PostgreSQL 11.x server

Hello Team,

Can you please confirm the compatibility of a PostgreSQL 10.x client with a
PostgreSQL 11.x server?






How to plpgsql scripting

2020-03-25 Thread Ekaterina Amez

Hi List,

I'm used to writing my own scripts in Oracle PL/SQL, SQL Server T-SQL... but 
I'm unable to write one simple script in Postgres.


Target version is 8.4 (I know, I know... it's a legacy server; I'm 
planning to upgrade it as soon as I can).


I have a test server with version 9.2 where I've successfully run this 
code from psql:


DO $$
DECLARE
   a integer := 10;
   b integer := 20;
   c integer;
BEGIN
   c := a + b;
    RAISE NOTICE 'Value of c: %', c;
END $$;

But this syntax (an anonymous code block?) is only available since 9.0, so I'm 
trying to adapt it to v8.4.


As per the documentation 
[https://www.postgresql.org/docs/8.4/plpgsql-structure.html], the 
structure of a code block is defined as:


[ <<label>> ]
[ DECLARE
    declarations ]
BEGIN
    statements
END [ label ];

so I've adapted my code to:

DECLARE
   a integer;
   b integer;
   c integer;
BEGIN
    a := 10;
    b := 20;
    c := a + b;
    RAISE NOTICE 'Value of c: %', c;
END ;

But when I run this from psql, both versions 8.4 and 9.2, all I get is:

testdb=# DECLARE
testdb-#    a integer;
ERROR:  syntax error at or near «integer»
LINE 2:    a integer;
  ^
testdb=#    b integer;
ERROR:  syntax error at or near «b»
LINE 1: b integer;
 ^
testdb=#    c integer;
ERROR:  syntax error at or near «c»
LINE 1: c integer;
 ^
testdb=# BEGIN
testdb-# a := 10;
ERROR:  syntax error at or near «a»
LINE 2: a := 10;
 ^
testdb=# b := 20;
ERROR:  syntax error at or near «b»
LINE 1: b := 20;
 ^
testdb=#    c := a + b;
ERROR:  syntax error at or near «c»
LINE 1: c := a + b;
 ^
testdb=# RAISE NOTICE'Value of c: %', c;
ERROR:  syntax error at or near «RAISE»
LINE 1: RAISE NOTICE'Value of c: %', c;
 ^
testdb=# END;
WARNING:  there is no transaction in progress
COMMIT
testdb=#

NOTE: I've translated error messages myself.

What's wrong with the syntax? Or is it not possible to write a script, so that 
I have to create a function to encapsulate my code?



Kind regards,

Ekaterina




Replacing Apache Solr with Postgre Full Text Search?

2020-03-25 Thread J2eeInside J2eeInside
Hi all,

I hope someone  can help/suggest:
I'm currently maintaining a project that uses Apache Solr/Lucene. To be
honest, I would like to replace Solr with Postgres Full Text Search. However,
there is a huge amount of documents involved - around 200GB. Can
Postgres handle this efficiently?
Does anyone have specific experience, and what should the infrastructure
look like?

P.S. To be clear, Solr works just fine; I just wanted to
eliminate one component from the whole system (if full-text search can
replace Solr at all).


Re: How to plpgsql scripting

2020-03-25 Thread Pavel Stehule
On Wed, 25 Mar 2020 at 13:20, Ekaterina Amez 
wrote:

> Hi List,
>
> I'm used to make my own scripts in Oracle plsql, Sql Server tsql... but
> I'm unable to make one simple script in Postgres.
>
> Objective version is 8.4 (I know, I know... it's a legacy server, I'm
> planning upgrade this server as soon as I can).
>
> I have a test server with 9.2 version where I've succesfully run this code
> from psql:
>
> DO $$
> DECLARE
>a integer := 10;
>b integer := 20;
>c integer;
> BEGIN
>c := a + b;
> RAISE NOTICE'Value of c: %', c;
> END $$;
>
> But this syntax is (anonymous code block?) is available since 9.0 so I'm
> trying to adapt this to v8.4
>

You cannot do this.

If you want to use PL/pgSQL in older releases, you have to write functions
and then run those functions.

A per documentation [
> https://www.postgresql.org/docs/8.4/plpgsql-structure.html], the
> structure of a code block is defined as:
>

Yes, but the block must be wrapped in some SQL statement - in 8.4, by
CREATE OR REPLACE FUNCTION.
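A sketch of that 8.4-era workaround, using the example from the original post (the function name is arbitrary):

```sql
CREATE OR REPLACE FUNCTION tmp_script() RETURNS void AS $$
DECLARE
    a integer := 10;
    b integer := 20;
    c integer;
BEGIN
    c := a + b;
    RAISE NOTICE 'Value of c: %', c;
END;
$$ LANGUAGE plpgsql;

SELECT tmp_script();         -- run the "script"
DROP FUNCTION tmp_script();  -- clean up
```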

[ <<label>> ]
> [ DECLARE
> declarations ]
> BEGIN
> statements
> END [ label ];
>
> so I've adapted my code to:
>
> DECLARE
>a integer;
>b integer;
>c integer;
> BEGIN
> a := 10;
> b := 20;
> c := a + b;
> RAISE NOTICE'Value of c: %', c;
> END ;
>
> But when I run this from psql, both versions 8.4 and 9.2, all I get is:
>
> testdb=# DECLARE
> testdb-#a integer;
> ERROR:  syntax error at or near «integer»
> LINE 2:a integer;
>   ^
> testdb=#b integer;
> ERROR:  syntax error at or near «b»
> LINE 1: b integer;
>  ^
> testdb=#c integer;
> ERROR:  syntax error at or near «c»
> LINE 1: c integer;
>  ^
> testdb=# BEGIN
> testdb-# a := 10;
> ERROR:  syntax error at or near «a»
> LINE 2: a := 10;
>  ^
> testdb=# b := 20;
> ERROR:  syntax error at or near «b»
> LINE 1: b := 20;
>  ^
> testdb=#c := a + b;
> ERROR:  syntax error at or near «c»
> LINE 1: c := a + b;
>  ^
> testdb=# RAISE NOTICE'Value of c: %', c;
> ERROR:  syntax error at or near «RAISE»
> LINE 1: RAISE NOTICE'Value of c: %', c;
>  ^
> testdb=# END;
> WARNING:  no hay una transacción en curso
> COMMIT
> testdb=#
>
> NOTE: I've translated error messages myself.
>
> What's wrong with the syntax? Or is not possible to make a script and I
> have to create a function to encapsulate my code?
>

This is just not a supported feature.

Do you have some special reason to use 8.4? It's a pretty old, unsupported
version.

Regards

Pavel

>
> Kind regards,
>
> Ekaterina
>
>
>


PLPGSQL: DECLARE more variable with same type at once

2020-03-25 Thread Durumdara
Hello!

I want to ask how to shorten this code.

Delta double precision;
UjErtek double precision;
UjDErtek double precision;

For example, the following raises an error:
Delta, UjErtek, UjDErtek double precision;

Do you know any other form?

Thank you for it!

Best regards
   dd


Re: How to plpgsql scripting

2020-03-25 Thread David G. Johnston
On Wednesday, March 25, 2020, Ekaterina Amez 
wrote:
>
> What's wrong with the syntax? Or is not possible to make a script and I
> have to create a function to encapsulate my code?
>
This not working is exactly the reason the “DO” command was created.

David J.


How to query "primary_slot_name" in slave server?

2020-03-25 Thread Edson Richter
Hi!

I've been searching over the net, but cannot find a reference.
I know that on the primary I can execute "select * from pg_replication_slots" and 
get all created slots.
But how can I query the standby to know which primary slot it is connecting to?

Thanks in advance,

Edson


Re: PLPGSQL: DECLARE more variable with same type at once

2020-03-25 Thread David G. Johnston
On Wednesday, March 25, 2020, Durumdara  wrote:

> Hello!
>
> I want to ask how to shortening code?
>
> Delta double precision;
> UjErtek double precision;
> UjDErtek double precision;
>
> For example - but this makes an error:
> Delta, UjErtek, UjDErtek double precision;
>
> Do you know any other form?
>
>
No other form is specified in the documentation.

David J.


Re: How to plpgsql scripting

2020-03-25 Thread Ekaterina Amez
On Wed, 25 Mar 2020 at 13:42, Pavel Stehule ()
wrote:

>
> just this is not supported feature.
>

I was afraid this was going to be the answer... sigh


> You have some special reason why you use 8.4? It's pretty old unsupported
> version.
>

As I said: legacy server. I'm planning to upgrade it.


> Regards
>
> Pavel
>
>>
>> Kind regards,
>>
>> Ekaterina
>>
>>
>>


Re: PLPGSQL: DECLARE more variable with same type at once

2020-03-25 Thread Pavel Stehule
On Wed, 25 Mar 2020 at 13:53, Durumdara  wrote:

> Hello!
>
> I want to ask how to shortening code?
>
> Delta double precision;
> UjErtek double precision;
> UjDErtek double precision;
>
> For example - but this makes an error:
> Delta, UjErtek, UjDErtek double precision;
>
> Do you know any other form?
>

The short form is not supported, because it is not supported in PL/pgSQL's
origin, PL/SQL.

Regards

Pavel

>
> Thank you for it!
>
> Best regards
>dd
>


Re: How to plpgsql scripting

2020-03-25 Thread Ekaterina Amez
On Wed, 25 Mar 2020 at 13:54, David G. Johnston (<
david.g.johns...@gmail.com>) wrote:

> On Wednesday, March 25, 2020, Ekaterina Amez 
> wrote:
>>
>> What's wrong with the syntax? Or is not possible to make a script and I
>> have to create a function to encapsulate my code?
>>
> This not working exactly the reason the “DO” command was created.
>

Thank you David, I was beginning to suppose this.


> David J.
>


PLPGSQL: when the local variable used and when the table field?

2020-03-25 Thread Durumdara
Hello!

PL/pgSQL allows me to write simple queries and updates without EXECUTE +
USING.

For example:
DECLARE t text; anytype text;
BEGIN
...
  select nev into t from anytable where type = anytype;
...
  insert into bla (id, name, type)
  select id, name, anytype from bla
...

But this method seems unsafe to me.
How does the compiler know which value I want to use?

If table bla has an "anytype" column, it can use that column, or it can use
the local variable too.
Which has higher precedence?

For example:
1.) I have this proc (installed):
select nev into t from anytable where type = anytype;
Later I add anytype column to anytable.

2.) Or this:
  insert into bla (id, name, type)
  select id, name, anytype from bla
Later I add anytype to bla.

What would happen?
Would the working procedure raise an error, because it has more than one
possible source of data (local variable and table column)?
Or would it use the local variable, because it has higher precedence in the scope?

OK, sometimes I can use a table prefix:
  select bla.id, bla.name, anytype from bla

But when I WANT to use the local variable (as a constant) and bla is later
extended with an anytype column, it would be a source of conflict.

In some places I force a "_" prefix on local variables to be sure of the
result.

Do you know any info about the background?

Thanks for answer!

Best  regards

   dd


Re: PLPGSQL: when the local variable used and when the table field?

2020-03-25 Thread David G. Johnston
On Wed, Mar 25, 2020 at 6:09 AM Durumdara  wrote:

>
> What would happen?
> The working procedure makes error, because it have more possible source of
> data (local variable, and table field)?
> Or it uses the local variable, because in the scope it has more precedency?
> [...]
>
> Do you know any info about the background?
>

https://www.postgresql.org/docs/12/plpgsql-implementation.html#PLPGSQL-VAR-SUBST
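In short: since 9.0 an ambiguous name raises an error by default, and the resolution can be chosen per function. A sketch using the hypothetical names from the question:

```sql
CREATE FUNCTION pick_name(anytype text) RETURNS text AS $$
#variable_conflict use_variable
DECLARE
    t text;
BEGIN
    -- With use_variable, "anytype" resolves to the parameter even if
    -- anytable later gains a column of the same name.
    SELECT nev INTO t FROM anytable WHERE type = anytype;
    RETURN t;
END;
$$ LANGUAGE plpgsql;
```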

David J.


Re: Replacing Apache Solr with Postgre Full Text Search?

2020-03-25 Thread Mike Rylander
On Wed, Mar 25, 2020 at 8:37 AM J2eeInside J2eeInside
 wrote:
>
> Hi all,
>
> I hope someone  can help/suggest:
> I'm currently maintaining a project that uses Apache Solr /Lucene. To be 
> honest, I wold like to replace Solr with Postgre Full Text Search. However, 
> there is a huge amount of documents involved - arround 200GB. Wondering, can 
> Postgre handle this efficiently?
> Does anyone have specific experience, and what should the infrastructure look 
> like?
>
> P.S. Not to be confused, the Sol works just fine, i just wanted to eliminate 
> one component from the whole system (if Full text search can replace Solr at 
> all)

I'm one of the core developers (and the primary developer of the
search subsystem) for the Evergreen ILS [1] (integrated library system
-- think book library, not software library).  We've been using PG's
full-text indexing infrastructure since day one, and I can say it is
definitely capable of handling pretty much anything you can throw at
it.

Our indexing requirements are very complex and need to be very
configurable, and need to include a lot more than just "search and
rank a text column," so we've had to build a ton of infrastructure
around record (document) ingest, searching/filtering, linking, and
display.  If your indexing and search requirements are stable,
specific, and well understood, it should be straightforward,
especially if you don't have to take into account non-document
attributes like physical location, availability, and arbitrary
real-time visibility rules like Evergreen does.

As for scale, it's more about document count than total size.  There
are Evergreen libraries with several million records to search, and
with proper hardware and tuning everything works well.  Our main
performance issue has to do with all of the stuff outside the records
(documents) themselves that have to be taken into account during
search.  The core full-text search part of our queries is extremely
performant, and has only gotten better over the years.

[1] http://evergreen-ils.org
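For reference, a basic full-text setup in Postgres looks roughly like this (a sketch; table and column names are hypothetical):

```sql
-- Expression index so full-text queries can use GIN
CREATE INDEX docs_fts_idx ON docs
    USING gin (to_tsvector('english', coalesce(body, '')));

-- A ranked search; the WHERE clause matches the index expression
SELECT id, ts_rank(to_tsvector('english', coalesce(body, '')), q) AS rank
FROM docs, to_tsquery('english', 'postgres & search') AS q
WHERE to_tsvector('english', coalesce(body, '')) @@ q
ORDER BY rank DESC
LIMIT 10;
```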

HTH,
--
Mike Rylander
 | Executive Director
 | Equinox Open Library Initiative
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  mi...@equinoxinitiative.org
 | web:  http://equinoxinitiative.org




Re: How to query "primary_slot_name" in slave server?

2020-03-25 Thread Paul Förster
Hi Edson,

query the following on the replica:

pg_stat_wal_receiver
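For example (a sketch; these columns are present as of PG 11/12):

```sql
-- On the standby: which slot and sender the WAL receiver is using
SELECT slot_name, sender_host, sender_port, status
FROM pg_stat_wal_receiver;
```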

Cheers,
Paul

> On 25. Mar, 2020, at 13:59, Edson Richter  wrote:
> 
> Hi!
> 
> I've been searching over the net, but cannot find a reference.
> I know that in server I can execute "select * from pg_replication_slots" and 
> get all created slots.
> But how can I query slave to know wich primay slot it is connecting to?
> 
> Thanks in advance,
> 
> Edson





Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-25 Thread Tom Lane
Marius Andreiana  writes:
> How about enabling users to do more advanced processing of JSON
> data inside the database itself, using JavaScript?
> There's the PLV8 extension https://github.com/plv8/plv8
> but it's not available for many major db providers (eg Heroku), making it
> hard to adopt it. Having it supported in the standard PostgreSQL
> distribution, besides the existing 4 languages (
> https://www.postgresql.org/docs/12/xplang.html), would be awesome!

The core Postgres project doesn't have unlimited resources --- in fact,
most of the time we feel desperately short of qualified developers.
So we're not eager to pick up the maintenance burden for extensions
that seem to be doing perfectly well on their own.  (The extensions
that do exist within the core distribution are largely there to make
sure we don't break extensibility by accident.)

Even if we did that, it would not automatically translate to downstream
packagers enabling the feature; they might not want the additional
dependencies.

So the right thing for you to do is to lobby Heroku to include PLV8
in their offering.  You'd have to convince them to support it in
any case.

regards, tom lane




RE: How to query "primary_slot_name" in slave server?

2020-03-25 Thread Edson Richter
From: Paul Förster 
Sent: Wednesday, March 25, 2020 11:42
To: Edson Richter 
Cc: pgsql-general 
Subject: Re: How to query "primary_slot_name" in slave server?

Hi Edson,

query the following on the replica:

pg_stat_wal_receiver


Perfect, thanks!

Edson



Cheers,
Paul

> On 25. Mar, 2020, at 13:59, Edson Richter  wrote:
>
> Hi!
>
> I've been searching over the net, but cannot find a reference.
> I know that in server I can execute "select * from pg_replication_slots" and 
> get all created slots.
> But how can I query slave to know wich primay slot it is connecting to?
>
> Thanks in advance,
>
> Edson



Re: PG12 autovac issues

2020-03-25 Thread Justin King
All-

This started happening again.  DEBUG1 is enabled:

Mar 25 14:48:03 cowtn postgres[39720]: [35294-1] 2020-03-25
14:48:03.972 GMT [39720] DEBUG:  autovacuum: processing database
"template0"
Mar 25 14:48:06 cowtn postgres[39735]: [35294-1] 2020-03-25
14:48:06.545 GMT [39735] DEBUG:  autovacuum: processing database
"postgres"
Mar 25 14:48:11 cowtn postgres[39759]: [35294-1] 2020-03-25
14:48:11.284 GMT [39759] DEBUG:  autovacuum: processing database
"template1"
Mar 25 14:48:14 cowtn postgres[39772]: [35294-1] 2020-03-25
14:48:14.564 GMT [39772] DEBUG:  autovacuum: processing database
"feedi"
Mar 25 14:48:14 cowtn postgres[39772]: [35295-1] 2020-03-25
14:48:14.588 GMT [39772] LOG:  automatic vacuum of table
"feedi.pg_catalog.pg_statistic": index scans: 1
Mar 25 14:48:14 cowtn postgres[39772]: [35295-2] #011pages: 0 removed,
117 remain, 0 skipped due to pins, 0 skipped frozen
Mar 25 14:48:14 cowtn postgres[39772]: [35295-3] #011tuples: 477
removed, 704 remain, 0 are dead but not yet removable, oldest xmin:
189591147
Mar 25 14:48:14 cowtn postgres[39772]: [35295-4] #011buffer usage: 315
hits, 0 misses, 0 dirtied
Mar 25 14:48:14 cowtn postgres[39772]: [35295-5] #011avg read rate:
0.000 MB/s, avg write rate: 0.000 MB/s
Mar 25 14:48:14 cowtn postgres[39772]: [35295-6] #011system usage:
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
Mar 25 14:48:15 cowtn postgres[39772]: [35296-1] 2020-03-25
14:48:15.606 GMT [39772] LOG:  automatic vacuum of table
"feedi.production.flightplans": index scans: 1
Mar 25 14:48:15 cowtn postgres[39772]: [35296-2] #011pages: 0 removed,
61331 remain, 0 skipped due to pins, 29772 skipped frozen
Mar 25 14:48:15 cowtn postgres[39772]: [35296-3] #011tuples: 21807
removed, 509983 remain, 1581 are dead but not yet removable, oldest
xmin: 189591147
Mar 25 14:48:15 cowtn postgres[39772]: [35296-4] #011buffer usage:
113684 hits, 0 misses, 1 dirtied
Mar 25 14:48:15 cowtn postgres[39772]: [35296-5] #011avg read rate:
0.000 MB/s, avg write rate: 0.008 MB/s
Mar 25 14:48:15 cowtn postgres[39772]: [35296-6] #011system usage:
CPU: user: 0.64 s, system: 0.12 s, elapsed: 1.00 s
Mar 25 14:48:16 cowtn postgres[39772]: [35297-1] 2020-03-25
14:48:16.537 GMT [39772] LOG:  automatic analyze of table
"feedi.production.flightplans" system usage: CPU: user: 0.83 s,
system: 0.03 s, elapsed: 0.93 s
Mar 25 14:48:16 cowtn postgres[39772]: [35298-1] 2020-03-25
14:48:16.627 GMT [39772] LOG:  automatic vacuum of table
"feedi.production.tita": index scans: 1
Mar 25 14:48:16 cowtn postgres[39772]: [35298-2] #011pages: 0 removed,
1711 remain, 0 skipped due to pins, 0 skipped frozen
Mar 25 14:48:16 cowtn postgres[39772]: [35298-3] #011tuples: 5936
removed, 75280 remain, 1079 are dead but not yet removable, oldest
xmin: 189591147
Mar 25 14:48:16 cowtn postgres[39772]: [35298-4] #011buffer usage:
5748 hits, 0 misses, 1 dirtied
Mar 25 14:48:16 cowtn postgres[39772]: [35298-5] #011avg read rate:
0.000 MB/s, avg write rate: 0.100 MB/s
Mar 25 14:48:16 cowtn postgres[39772]: [35298-6] #011system usage:
CPU: user: 0.06 s, system: 0.00 s, elapsed: 0.07 s
Mar 25 14:48:16 cowtn postgres[39772]: [35299-1] 2020-03-25
14:48:16.959 GMT [39772] LOG:  automatic analyze of table
"feedi.production.tita" system usage: CPU: user: 0.32 s, system: 0.00
s, elapsed: 0.33 s
Mar 25 14:48:16 cowtn postgres[39772]: [35300-1] 2020-03-25
14:48:16.969 GMT [39772] LOG:  automatic analyze of table
"feedi.production.virtual_clocks" system usage: CPU: user: 0.00 s,
system: 0.00 s, elapsed: 0.00 s
Mar 25 14:48:18 cowtn postgres[39790]: [35294-1] 2020-03-25
14:48:18.975 GMT [39790] DEBUG:  autovacuum: processing database
"template0"
Mar 25 14:48:21 cowtn postgres[39799]: [35294-1] 2020-03-25
14:48:21.546 GMT [39799] DEBUG:  autovacuum: processing database
"postgres"
Mar 25 14:48:22 cowtn postgres[39853]: [35294-1] 2020-03-25
14:48:22.446 GMT [39853] LOG:  connection received: host=10.4.4.11
port=25424
Mar 25 14:48:22 cowtn postgres[39853]: [35295-1] 2020-03-25
14:48:22.451 GMT [39853] LOG:  connection authorized: user=feedi
database=feedi SSL enabled (protocol=TLSv1.3,
cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)
Mar 25 14:48:26 cowtn postgres[39875]: [35294-1] 2020-03-25
14:48:26.277 GMT [39875] DEBUG:  autovacuum: processing database
"postgres"
Mar 25 14:48:26 cowtn postgres[39875]: [35295-1] 2020-03-25
14:48:26.298 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
wraparound of table "postgres.pg_catalog.pg_authid"
Mar 25 14:48:26 cowtn postgres[39875]: [35296-1] 2020-03-25
14:48:26.309 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
wraparound of table "postgres.pg_catalog.pg_subscription"
Mar 25 14:48:26 cowtn postgres[39875]: [35297-1] 2020-03-25
14:48:26.319 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
wraparound of table "postgres.pg_catalog.pg_database"
Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
14:48:26.329 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
wraparound of table "postg

Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-25 Thread Marius Andreiana
Thanks Tom, that makes sense. Appreciate your time to explain the context.

I'll followup with Heroku.

Have a peaceful day,
Marius


On Wed, Mar 25, 2020 at 4:50 PM Tom Lane  wrote:

> Marius Andreiana  writes:
> > How about enabling users to do more advanced processing of JSON
> > data inside the database itself, using JavaScript?
> > There's the PLV8 extension https://github.com/plv8/plv8
> > but it's not available for many major db providers (eg Heroku), making it
> > hard to adopt it. Having it supported in the standard PostgreSQL
> > distribution, besides the existing 4 languages (
> > https://www.postgresql.org/docs/12/xplang.html), would be awesome!
>
> The core Postgres project doesn't have unlimited resources --- in fact,
> most of the time we feel desperately short of qualified developers.
> So we're not eager to pick up the maintenance burden for extensions
> that seem to be doing perfectly well on their own.  (The extensions
> that do exist within the core distribution are largely there to make
> sure we don't break extensibility by accident.)
>
> Even if we did that, it would not automatically translate to downstream
> packagers enabling the feature; they might not want the additional
> dependencies.
>
> So the right thing for you to do is to lobby Heroku to include PLV8
> in their offering.  You'd have to convince them to support it in
> any case.
>
> regards, tom lane
>


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
>
> > Is it even possible to use PG partitioning across a Foreign Server?
> I am not certain what you mean, but you can have foreign tables as
> partitions
> of a partitioned table.  The partitions won't be processed in parallel
> though.


I have a large, growing table, that I'd like to start partitioning, but
also would like "older" partitions to be stored in a separate database,
connected via FDW. I haven't played around with partitioning at all yet, so
clearly I'm not sure how to ask the question :)

The hope is to still have one "seamless" table users can query, but to
spread the storage across different databases. I realize that may be asking
for too much.


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Stephen Frost
Greetings,

* Laurenz Albe (laurenz.a...@cybertec.at) wrote:
> On Tue, 2020-03-24 at 17:50 -0500, Chris Morris wrote:
> > Is it even possible to use PG partitioning across a Foreign Server?
> 
> I am not certain what you mean, but you can have foreign tables as partitions
> of a partitioned table.  The partitions won't be processed in parallel though.

Not yet..  There is ongoing work to make that happen though.

Also, accesses through the partitioned table to the foreign tables can
happen in parallel, of course; it just has to be through different
connections to the main database.  This makes it reasonable to consider
using a partitioned table across foreign tables for queries that are
pulling back a small set of records, ideally based on the partition key
so that only the one foreign table that has the data you need is
queried, but it's not so good for large analytical type of workloads
where you want to run something across all of the partitions in
parallel (and in parallel on each of the partitions, etc).

Thanks,

Stephen




Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
>
> Not yet..  There is ongoing work to make that happen though.


Glad to hear it. :) Thx.


Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Michael Lewis
Chris,
Does it actually need to be a different server and database, or would it be
possible to have another storage device added to your existing database and
make use of tablespaces to accomplish pseudo-archive of older partitions?
Just a thought.



Re: PG 12: Partitioning across a FDW?

2020-03-25 Thread Chris Morris
Right now my dbs are hosted by Heroku, so I doubt I have any control over
the dbs at that level.

Thanks for the idea though! :)

On Wed, Mar 25, 2020 at 12:04 PM Michael Lewis  wrote:

> Chris,
> Does it actually need to be a different server and database, or would it
> be possible to have another storage device added to your existing database
> and make use of tablespaces to accomplish pseudo-archive of older
> partitions? Just a thought.
>
>>


Re: Duplicate key violation on upsert

2020-03-25 Thread Matt Magoffin

> On 23/03/2020, at 1:10 PM, Adrian Klaver  wrote:
> 
> So the query is in the function solardatum.store_datum()?

> 
> If so what is it doing?

Yes. This function first performs the INSERT INTO the solardatum.da_datum table 
that we’re discussing here; then it inserts into two different tables. If it 
helps, the actual SQL is available here:

https://github.com/SolarNetwork/solarnetwork-central/blob/4fa585929a5526187ade0e842c809837647c6a41/solarnet-db-setup/postgres/postgres-init-generic-datum-x-functions.sql#L203-L242

> And could you capture the values and pass them to a RAISE NOTICE?

It would take me some time to get that change deployed. If I was able to, what 
information do you think would be helpful here, e.g. that jdata_a is NULL or 
not, or something else?

The duplicate key violation occurs infrequently, and it does seem appropriate 
to drop the UNIQUE constraint on the da_datum_x_acc_idx given uniqueness is 
really only wanted on (node_id, ts, source_id). As long as I can confirm that 
query performance doesn’t decrease, I’d like to recreate the index without 
UNIQUE. Then I’m hoping this problem, whatever the cause, goes away.
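A sketch of that swap without blocking writers would be something like this (the column list here is an assumption based on the uniqueness described above, not the index's actual definition):

```sql
-- Build the non-unique replacement first, then drop the unique one
CREATE INDEX CONCURRENTLY da_datum_x_acc_idx2
    ON solardatum.da_datum (node_id, ts, source_id);

DROP INDEX CONCURRENTLY solardatum.da_datum_x_acc_idx;
```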

— m@

Re: PostgreSQL 13: native JavaScript Procedural Language support ?

2020-03-25 Thread Bruce Momjian
On Wed, Mar 25, 2020 at 05:46:27PM +0200, Marius Andreiana wrote:
> Thanks Tom, that makes sense. Appreciate your time to explain the context.
> 
> I'll followup with Heroku.

Also, I have heard PL/V8 is very hard to build for packagers (because of
changes by Google in the way V8 is packaged), which has decreased PL/V8
adoption.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +




Re: PG12 autovac issues

2020-03-25 Thread Michael Paquier
On Wed, Mar 25, 2020 at 10:39:17AM -0500, Justin King wrote:
> This started happening again.  DEBUG1 is enabled:

Thanks for enabling DEBUG1 logs while this happened.

> Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
> 14:48:26.329 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> wraparound of table "postgres.pg_catalog.pg_tablespace"
> Mar 25 14:48:26 cowtn postgres[39875]: [35299-1] 2020-03-25
> 14:48:26.339 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> wraparound of table "postgres.pg_catalog.pg_auth_members"
> Mar 25 14:48:26 cowtn postgres[39875]: [35300-1] 2020-03-25
> 14:48:26.350 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> wraparound of table "postgres.pg_catalog.pg_replication_origin"

Are you seeing such log entries happening multiple times for the same
relations, meaning that autovacuum workers are just looping on the
same relations all over again?  This part of the logs point to catalog
tables, but are there other tables within your system facing the same
logs, particularly the database "feedi" with some of your own tables?

>  postgres=# SELECT oid::regclass, age(relfrozenxid), relfrozenxid FROM
> pg_class WHERE relfrozenxid <> 0 ORDER BY age(relfrozenxid) DESC LIMIT
> 1;
> oid|age| relfrozenxid
> ---+---+--
>  pg_authid | 202793549 |   4284570172

Ugh.  I think that this is exactly the thing I was suspecting
previously:
- The database stats look sane.
- The relation stats don't look good and are visibly in such a
state that only one type of job gets triggered (non-aggressive but
anti-wraparound), which just keeps being skipped, and the relation stats
don't get refreshed.  (Such autovacuum jobs should never happen and we
have some underlying issues that will need separate care.)

If you still have the cluster in this current state (perhaps you are
not able to keep it longer), could you provide more data about
pg_class.relfrozenxid for the tables which are mentioned in the logs
of the type "skipping redundant vacuum to prevent wraparound of table"?
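Something like this would capture it (a sketch, covering the catalog tables seen in the logs so far):

```sql
SELECT oid::regclass AS relation, relfrozenxid, age(relfrozenxid)
FROM pg_class
WHERE relname IN ('pg_authid', 'pg_tablespace', 'pg_auth_members',
                  'pg_replication_origin', 'pg_subscription', 'pg_database')
  AND relkind = 'r';
```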

> Let me know if there's anything else useful I can provide.

Thanks!
--
Michael




Re: PG12 autovac issues

2020-03-25 Thread Andres Freund
Hi,

On 2020-03-26 10:43:36 +0900, Michael Paquier wrote:
> On Wed, Mar 25, 2020 at 10:39:17AM -0500, Justin King wrote:
> > Mar 25 14:48:26 cowtn postgres[39875]: [35298-1] 2020-03-25
> > 14:48:26.329 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_tablespace"
> > Mar 25 14:48:26 cowtn postgres[39875]: [35299-1] 2020-03-25
> > 14:48:26.339 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_auth_members"
> > Mar 25 14:48:26 cowtn postgres[39875]: [35300-1] 2020-03-25
> > 14:48:26.350 GMT [39875] DEBUG:  skipping redundant vacuum to prevent
> > wraparound of table "postgres.pg_catalog.pg_replication_origin"

FWIW, this kind of thing is why I think the added skipping logic is a
bad idea. Silently skipping things like this (same with the "bogus"
logic in datfrozenxid computation) is dangerous. I think we should
seriously consider backing this change out.

And if not, then we should at least include enough detail in the message
to be able to debug this.


> >  postgres=# SELECT oid::regclass, age(relfrozenxid), relfrozenxid FROM
> > pg_class WHERE relfrozenxid <> 0 ORDER BY age(relfrozenxid) DESC LIMIT
> > 1;
> >     oid    |    age    | relfrozenxid
> > -----------+-----------+--------------
> >  pg_authid | 202793549 |   4284570172
> 
> Ugh.  I think that this is exactly the thing I was suspecting
> previously:
> - The database stats look sane.
> - The relation stats don't look good and visibly are put in such a
> state that only one type of jobs gets triggered (non-aggressive but
> anti-wraparound), which just keep being skipped and the relation stats
> don't get refreshed.  (Such autovacuum jobs should never happen and we
> have some underlying issues that will need separate care).

Hm. Why is this a state that is clearly invalid compared to pg_database?
Seems to precisely match

> postgres=# SELECT datname, age(datfrozenxid), datfrozenxid FROM
> pg_database ORDER BY age(datfrozenxid) DESC LIMIT 1;
>  datname  |    age    | datfrozenxid
> ----------+-----------+--------------
>  postgres | 202773709 |   4284570172

And why should this lead to anti-wraparound vacuums not happening? This
is older than the cutoff age?

xid 4284570172 having the age of 202 million xids suggests that
ReadNewTransactionId() is approx 192376585. Which comports with the log
saying: oldest xmin: 189591147.
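Spelled out, that estimate is just the mod-2^32 xid arithmetic (a sketch;
the constants come from the pg_database output quoted above):

```sql
-- datfrozenxid + age(datfrozenxid) wraps past 2^32:
-- 4284570172 + 202773709 = 4487343881, and 4487343881 - 2^32 = 192376585
SELECT 4284570172::bigint + 202773709 - (1::bigint << 32) AS approx_next_xid;
```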


Or are you saying that you conclude that the relcache entry is somehow
out of date? It sure is interesting that all of the tables that hit the
"skipping redundant vacuum" condition are shared tables.
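If the shared catalogs are indeed the common factor, a query like this
against the live cluster would show whether all of them carry the same
stale-looking horizon:

```sql
-- frozen-xid state of every shared relation with storage
SELECT oid::regclass AS relation,
       relfrozenxid,
       age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relisshared
  AND relkind = 'r'
ORDER BY age(relfrozenxid) DESC;
```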

Greetings,

Andres Freund




Re: PG12 autovac issues

2020-03-25 Thread Michael Paquier
On Wed, Mar 25, 2020 at 07:59:56PM -0700, Andres Freund wrote:
> FWIW, this kind of thing is why I think the added skipping logic is a
> bad idea. Silently skipping things like this (same with the "bogus"
> logic in datfrozenxid computation) is dangerous. I think we should
> seriously consider backing this change out.

That's actually what I would like to do at this stage as a first
step.  It looks pretty clear that it does not help.

> And if not, then we should at least include enough detail in the message
> to be able to debug this.

Sure.  In every attempt I have made until now I was easily able to get
some jobs skipped, but it should get easier with a higher number of
concurrent workers and a higher number of heavily-updated relations.
Thinking about it, only catalog jobs were getting skipped in my own runs...

>> postgres=# SELECT datname, age(datfrozenxid), datfrozenxid FROM
>> pg_database ORDER BY age(datfrozenxid) DESC LIMIT 1;
>>  datname  |    age    | datfrozenxid
>> ----------+-----------+--------------
>>  postgres | 202773709 |   4284570172
> 
> And why should this lead to anti-wraparound vacuums not happening? This
> is older than the cutoff age?
> 
> xid 4284570172 having the age of 202 million xids suggests that
> ReadNewTransactionId() is approx 192376585. Which comports with the log
> saying: oldest xmin: 189591147.

Oops, sorry.  My previous email was incorrect.  It looked strange not
to see datfrozenxid being refreshed.

> Or are you saying that you conclude that the relcache entry is somehow
> out of date? It sure is interesting that all of the tables that hit the
> "skipping redundant vacuum" condition are shared tables.

Yeah, that's actually what I was thinking yesterday.  In
heap_vacuum_rel(), xidFullScanLimit may be calculated right, but an
incorrect value of rd_rel->relminmxid or rd_rel->relfrozenxid could
lead a job to become non-aggressive.  It should actually be easy
enough to check that.
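As a rough SQL analogue of that check (an approximation only — the real
cutoff in vacuum_set_xid_limits() also caps vacuum_freeze_table_age at 95%
of autovacuum_freeze_max_age, and heap_vacuum_rel() reads the relcache
values rather than a fresh catalog scan):

```sql
-- approximate "would this vacuum be aggressive?" from catalog values
SELECT oid::regclass AS relation,
       age(relfrozenxid) AS xid_age,
       age(relfrozenxid) >
           least(current_setting('vacuum_freeze_table_age')::int,
                 (current_setting('autovacuum_freeze_max_age')::int * 0.95)::int)
           AS would_be_aggressive
FROM pg_class
WHERE relisshared AND relkind = 'r';
```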
--
Michael




Re: Duplicate key violation on upsert

2020-03-25 Thread Adrian Klaver

On 3/25/20 5:23 PM, Matt Magoffin wrote:


On 23/03/2020, at 1:10 PM, Adrian Klaver wrote:


So the query is in the function solardatum.store_datum()?

If so what is it doing?


Yes. This function first performs the INSERT INTO the 
solardatum.da_datum table that we’re discussing here; then it inserts 
into two different tables. If it helps, the actual SQL is available here:


https://github.com/SolarNetwork/solarnetwork-central/blob/4fa585929a5526187ade0e842c809837647c6a41/solarnet-db-setup/postgres/postgres-init-generic-datum-x-functions.sql#L203-L242


I don't see anything wrong at first glance, but it is getting late here. I 
will take another look in the morning.





And could you capture the values and pass them to a RAISE NOTICE?


It would take me some time to get that change deployed. If I was able 
to, what information do you think would be helpful here, e.g. that 
jdata_a is NULL or not, or something else?


The values for (node_id, ts, source_id, jdata_a), as they comprise the 
UNIQUE values for da_datum_pkey and da_datum_x_acc_idx.
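For example (hypothetical — the variable names would need to match the
ones actually used inside solardatum.store_datum()):

```sql
-- plpgsql, just before the INSERT in solardatum.store_datum()
RAISE NOTICE 'store_datum: node_id=%, ts=%, source_id=%, jdata_a=%',
    node_id, ts, source_id, jdata_a;
```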




The duplicate key violation occurs infrequently, and it does seem 
appropriate to drop the UNIQUE constraint on the da_datum_x_acc_idx index, 
given that uniqueness is really only wanted on (node_id, ts, source_id). As 
long as I can confirm that query performance doesn’t decrease, I’d like 
to recreate the index without UNIQUE. Then I’m hoping this problem, 
whatever the cause, goes away.
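One low-risk way to make that swap, sketched here with an assumed column
list (the real definition should be taken from \d+ solardatum.da_datum
first), is to build the non-unique replacement concurrently before
dropping the old index:

```sql
-- hypothetical sketch; verify columns/operator classes of da_datum_x_acc_idx
CREATE INDEX CONCURRENTLY da_datum_x_acc_idx_new
    ON solardatum.da_datum (node_id, ts, source_id, jdata_a);

-- compare plans with EXPLAIN (ANALYZE) before dropping the UNIQUE index
DROP INDEX CONCURRENTLY solardatum.da_datum_x_acc_idx;
ALTER INDEX solardatum.da_datum_x_acc_idx_new RENAME TO da_datum_x_acc_idx;
```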


— m@



--
Adrian Klaver
adrian.kla...@aklaver.com