Re: Data entry / data editing tools (more end-user focus).

2019-05-23 Thread Tony Shelver
I looked at quite a few options. Some constraints on my side: our
direction is open source, with Linux development and servers.
Radzen is .NET; I could just as well use MS Access to cobble together a
front end.

CUBA and OpenXava are Java-based and seem to require writing Java for the
logic. I last used Java nearly 20 years ago, and 'fast development' and Java
are, IMHO, an oxymoron.

Aurelia looks a bit promising, but I am not sure it gains anything over
the current crop of JS libraries and frameworks, such as Vue, React et al.,
and the libraries / frameworks built on top of those, such as Nuxt / Vue,
Quasar / Vue or Vuetify / Vue, which seem to have far more activity on
GitHub.

I managed to set up a quick and dirty front end using LibreOffice Base over
a weekend. For the next iteration I will probably move over to a Vue
framework, probably Quasar.

On Sat, 18 May 2019 at 00:26, Stefan Keller  wrote:

> Dear all
>
> What about the following "Rapid App Development Tools"?
> * OpenXava (Java): https://www.openxava.org/ate/visual-studio-lightswitch
> * Radzen (.NET):
> https://www.radzen.com/visual-studio-lightswitch-alternative/
> * Other: https://aurelia.io/ (JS) or CUBA https://www.cuba-platform.com/
> (Java)
>
> :Stefan
>
> On Thu, 28 Mar 2019 at 15:39, Adrian Klaver wrote:
> >
> > On 3/27/19 11:49 PM, Tony Shelver wrote:
> >
> > Please reply to the list also, more eyes on the problem.
> > Ccing list
> >
> > My take on the below is: since you are feeding a website, why not use web
> > technologies for your data entry? My language of choice is Python. I
> > have done something similar to this (on a small scale) using the Django
> > framework. For something lighter weight there is Flask. Then your client
> > becomes a browser and you do not have to distribute forms around. You
> > could integrate with the existing web apps you are using, e.g. SnipCart.
> >
> >
> > > Actually I found a possibility.  LibreOffice Base on top of PG lets me
> > > paste photos into a Postgresql bytea field no problem.  MS Access should
> > > work well also, but I am not going to buy it, and I am running Ubuntu
> > > most of the time.
> > > Possibly I will distribute the Base forms to select users to enter data.
> > > We are a startup company, so this is an affordable temporary fix, until
> > > the product I have been looking at matures, or we can roll our own.
> > >
> > > We are building a company website, including an eStore, and have a few
> > > hundred products to load and maintain. Our product data currently isn't
> > > suitable for a sales catalog
> > > (brands, categories, products, pricing and deal info, specials, images,
> > > product comparisons and so on).
> > >
> > > Right now I input / maintain this via CSV files maintained through a
> > > spreadsheet (LibreOffice Calc), which our site generator (Jekyll) uses
> > > to build out the static HTML product pages automatically.
> > > This is really quick for entering basic data, but I have to maintain
> > > image uploads, image names and so on manually in the spreadsheet and
> > > through individual file uploads. We have at least one, preferably 3 and
> > > up to 6 photos per product to maintain. Call it 1000 images right now,
> > > and that will only go up.
> > > Invalid text / characters in product descriptions and so on can break
> > > the CSV as well.
> > >
> > > There are headless CMS solutions out on the market targeting this same
> > > area, but for various reasons the suitable ones are still maturing and
> > > shaking out in the marketplace, so I am not in a hurry to make a choice.
> > >
> > > So the goal is to replace CSV with JSON file input. This will also make
> > > my life easier for more complex structures such as multiple categories
> > > and specs per product.
> > > I also want to migrate text that can change from the HTML pages into the
> > > database, for easier changes by end users. For this the users could use
> > > a WYSIWYG Markdown editor, and just cut and paste the Markdown into Base
> > > forms when finished. This will be converted to HTML at build time by the
> > > Jekyll static site generator or a script.
> > >
> > > So the proposed solution:
> > > 1. Create the database in Postgresql.
> > > 2. Link Base or another tool to it and design input forms where necessary.
> > > 3. Enter the data through Base into PG, including images, Markdown / HTML
> > > text, long descriptions and so on.
> > > 3a. If I don't get a suitable CMS going, I could spend some time
> > > developing a Vue/Quasar/Nuxt (or whatever) front end to handle this, in
> > > several months' time.
> > > 4. Pull the data from Postgres using Python (psycopg2 will handle
> > > images), or a node.js app once my JS skills improve.
> > > 4a. Optionally use PostgREST, Postgraphile, Python Graphene or another
> > > tool to create an externally accessible API, and then use a Python or
> > > JavaScript module to pull the data out.
> > > 5. This program will then write the JSON product file t
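(Step 4 above lends itself to a short sketch. The products table, column
names and output paths below are hypothetical, not from the thread; psycopg2
returns bytea columns as memoryview objects that can be written straight to
an image file:)

import json
import psycopg2

# hypothetical connection string; adjust to the real database
conn = psycopg2.connect("dbname=catalog user=jekyll")
with conn, conn.cursor() as cur:
    cur.execute("SELECT sku, name, image FROM products")
    for sku, name, image in cur.fetchall():
        # bytea arrives as a memoryview; dump it as the product photo
        with open(f"assets/img/{sku}.jpg", "wb") as img:
            img.write(bytes(image))
        # emit the product record for Jekyll as JSON
        with open(f"_data/products/{sku}.json", "w") as out:
            json.dump({"sku": sku, "name": name}, out)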

Re: Table inheritance over schema boundaries possible

2019-05-23 Thread Achilleas Mantzios

On 23/5/19 8:46 π.μ., Thiemo Kellner wrote:

Hi all

I am wondering whether table inheritance is possible across the boundaries of
schemata and different owners.

I have a database act with schemata il and scd. When I issue, as IL:

create table IL.INHERITANCE_TEST() inherits (SCD.TEMPL_BK);

I get

[Code: 0, SQL State: 42501]  ERROR: permission denied for schema scd

even though I granted all privileges to IL.

What am I missing, or is what I want to do just not possible?

As of 10 (same with 11), you (IL) must be the owner of the relation
SCD.TEMPL_BK. Create your tables with a user who has the correct privileges
on both tables/schemas.
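(A minimal sketch of that workaround, assuming a role scd_owner that owns
SCD.TEMPL_BK; the role name and the grant are illustrative, not from the
thread:)

GRANT CREATE ON SCHEMA il TO scd_owner;  -- run as the owner of schema il
SET ROLE scd_owner;                      -- scd_owner owns scd.templ_bk
CREATE TABLE il.inheritance_test () INHERITS (scd.templ_bk);
ALTER TABLE il.inheritance_test OWNER TO il;  -- hand the child back to IL
RESET ROLE;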







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Table inheritance over schema boundaries possible

2019-05-23 Thread Thiemo Kellner


Quoting Achilleas Mantzios :


as of 10 (same with 11) you (IL) must be the owner of relation SCD.TEMPL_BK .
Create your tables with a user who has correct privileges on both  
tables/schemas.


Not what I hoped for, but what I was afraid of. Thank you for the answer.






--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Handys: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B




Re: Refresh Publication takes hours and doesn't finish

2019-05-23 Thread Peter Eisentraut
On 2019-05-20 23:30, Tom Lane wrote:
> Hmm ... given that pg_get_publication_tables() shouldn't return any
> duplicate OIDs, it does seem unnecessarily inefficient to put it in
> an IN-subselect condition.  Peter, is there a reason why this isn't
> a straight lateral join?  I get a much saner-looking plan from
> 
> FROM pg_publication P, pg_class C
> -JOIN pg_namespace N ON (N.oid = C.relnamespace)
> -   WHERE C.oid IN (SELECT relid FROM pg_get_publication_tables(P.pubname));
> +JOIN pg_namespace N ON (N.oid = C.relnamespace),
> +LATERAL pg_get_publication_tables(P.pubname)
> +   WHERE C.oid = pg_get_publication_tables.relid;

No reason, I think; I just didn't quite manage to recognize the possibility
of using LATERAL at the time.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Data entry / data editing tools (more end-user focus).

2019-05-23 Thread bret_st...@machinemanagement.com
Not exactly sure how much coding you are interested in doing. I use Access and
LibreOffice Base to prototype data entry from time to time. I've been pretty
happy with Lazarus lately: a free Delphi/Pascal IDE that runs cross-platform.
You can put stuff together pretty quickly.

Best
 Original message 
From: Tony Shelver
Date: 5/23/19 2:52 AM (GMT-08:00)
To: Stefan Keller
Cc: pgsql-general
Subject: Re: Data entry / data editing tools (more end-user focus).
[quoted message trimmed; see Tony Shelver's post above]

Re: Data entry / data editing tools (more end-user focus).

2019-05-23 Thread Frank Alberto Rodriguez
If you have the database modeled, the quickest thing I can think of is the
Python framework Django. Configure the connection to the DB and reverse
engineer it with Django; this creates the entity classes. Then activate the
administration forms and configure a form for each entity (a few lines), and
Django does the magic and builds a GUI for the DB. If the entities are
related, it also gives you the option to insert the required related entities
first. As a plus, you get the access control module for free too. See the
sketch below.
Regards
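(A rough illustration of that workflow; the app and model names below are
hypothetical, not from the thread:)

# 1) Reverse-engineer model classes from the existing schema:
#        python manage.py inspectdb > myapp/models.py
# 2) Register the generated models so the admin builds CRUD forms for them.
#    myapp/admin.py:
from django.contrib import admin
from myapp.models import Product, Category

admin.site.register(Product)
admin.site.register(Category)

With that in place, /admin/ serves ready-made entry forms, including
foreign-key widgets for related entities and Django's user/permission system.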
On Thu, 2019-05-23 at 11:52 +0200, Tony Shelver wrote:
> [quoted message trimmed; see Tony Shelver's post above]

Re: Data entry / data editing tools (more end-user focus).

2019-05-23 Thread Rich Shepard

On Thu, 23 May 2019, Frank Alberto Rodriguez wrote:


If you have the database modeled, the quickest thing I can think of is the
Python framework Django. [...]


Frank, et al.:

Django is great if you want a browser UI for a web-based application. If you
want a desktop application, the Django alternative could be Python3 +
psycopg2 + SQLAlchemy using the tkinter UI library, which comes packaged with
Python. A minimal sketch of that combination follows.
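(A hedged sketch of such a desktop entry form; the connection string and the
products table are hypothetical:)

import tkinter as tk
from sqlalchemy import create_engine, text

# SQLAlchemy engine over psycopg2; adjust the URL to the real database
engine = create_engine("postgresql+psycopg2://user@localhost/catalog")

def save():
    # write the form contents to the database in one short transaction
    with engine.begin() as conn:
        conn.execute(text("INSERT INTO products (name) VALUES (:n)"),
                     {"n": name_var.get()})

root = tk.Tk()
root.title("Product entry")
name_var = tk.StringVar()
tk.Label(root, text="Product name").pack()
tk.Entry(root, textvariable=name_var).pack()
tk.Button(root, text="Save", command=save).pack()
root.mainloop()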

Rich




Re: terminating walsender process due to replication timeout

2019-05-23 Thread AYahorau
Hello Everyone!

I can simplify and describe the issue I faced.
I have 2 nodes in a db cluster: master and standby.
I create a simple table on the master node via psql:
CREATE TABLE table1 (a INTEGER);
After this I fill the table with a COPY command from a file which contains
2,000,000 (2 million) entries.

Then, when I run a command such as:
UPDATE table1 SET a='1';
or:
DELETE FROM table1;
I see the following entry in the PostgreSQL log: terminating walsender
process due to replication timeout.

I suppose this issue is caused by the small value of wal_sender_timeout=1s
and the long runtime of the queries (about 15 seconds).

What is the best way to proceed? How can I avoid this? Is there any
additional configuration which can help here?
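(For reference, the timeout is a single postgresql.conf setting on the
sending server; the value below is illustrative, the default being 60s:)

# postgresql.conf on the primary -- picked up on reload, no restart needed
wal_sender_timeout = 60s   # 1s trips whenever the standby lags > 1 second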

Regards, 
Andrei 



From:   Andrei Yahorau/IBA
To: Kyotaro HORIGUCHI , 
Cc: pgsql-gene...@postgresql.org, rene.romer...@gmail.com
Date:   17/05/2019 11:04
Subject:Re: terminating walsender process due to replication 
timeout


Hello.

Thanks for the answer.

Can frequent database operations cause a standby server to fall behind? Is
there a way to avoid this situation?
I checked that walsender works well in my test if I set wal_sender_timeout
to at least 5 seconds.

Best regards, 
Andrei Yahorau




From:   Kyotaro HORIGUCHI 
To: ayaho...@ibagroup.eu, 
Cc: rene.romer...@gmail.com, pgsql-gene...@postgresql.org
Date:   16/05/2019 10:36
Subject:Re: terminating walsender process due to replication 
timeout



Hello.

At Wed, 15 May 2019 10:04:12 +0300, ayaho...@ibagroup.eu wrote in 

> Hello,
> Thank You for the response.
> 
> Yes that's possible to monitor replication delay. But my questions were 
> not about monitoring network issues. 
> 
> I use exactly wal_sender_timeout=1s because it allows to detect 
> replication problems quickly.

Though I don't have an exact idea of your configuration, it seems to me
that your standby is simply falling more than one second behind the
master. If you regard that as a replication problem, then the
configuration can be said to be finding the problem correctly.

Since the keep-alive packet is sent in-band, it doesn't get to
the standby before already-sent-but-not-processed packets.

> So, I need clarification to the following  questions:
> Is  it possible to use exactly this configuration and be sure that it 
will 
> be work properly.
> What did I do wrong? Should I correct my configuration somehow?
> Is this the same issue  as mentioned here: 
> 
https://www.postgresql.org/message-id/e082a56a-fd95-a250-3bae-0fff93832...@2ndquadrant.com
 

> ? If it is so, why I do I face this problem again?

It is not the same "problem". What was mentioned there is a fast network
making the sender-side loop busy, which prevents the keepalive packet from
being sent.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




SERIALs and wraparound

2019-05-23 Thread Arnold Hendriks
Hi List ... I'm investigating how to port our data and applications from our
in-house developed database server to PostgreSQL.

One of the challenges I'm running into is replicating an 'autonumber' column
type. We're used to having a primary key that autogenerates a 31-bit integer,
wraps around back to its starting position once it has reached MAXINT, and
automatically skips over any IDs that are already in use (even ones in
uncommitted transactions).

Postgresql's SERIAL (and the underlying SEQUENCE machinery) is comparable: it
can be set up to wrap around once it hits the 31-bit INTMAX, but from
everything I've read it has no option to skip over IDs that are already in
use - so after the first wraparound occurs, transactions risk failing on the
unique constraint of the primary key.

I've checked Stack Overflow and other forums; as far as I can tell there is
no 'easy' fix yet. Setting a 'red line' for the sequence and renumbering the
primary key and references once you hit it seems to be the best known
solution that doesn't require changes from downstream/API users (as extending
to 64-bit or using UUIDs would).

So my questions are:
- Is my description accurate, i.e. there is no 'standard' solution yet? (I
haven't missed any SO article?)

- Has someone already attempted to work around this by fixing/providing an
alternative to nextval() which would skip in-use values? A sketch of such a
wrapper follows below.
(Our own database 'solved' this by looking directly at the btree index for
the primary key to find and skip any in-use value, even ones that wouldn't
be committed/visible yet, and storing the 'next' value in a global
mutex-protected variable. But I presume postgresql's architecture wouldn't
make it that easy, or it would have already been implemented.)
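(A hedged sketch of such a wrapper, assuming a table items with an integer
primary key id; unlike the in-house behaviour above, it does not see IDs
held by uncommitted concurrent transactions:)

CREATE SEQUENCE items_id_seq MAXVALUE 2147483647 CYCLE;

CREATE FUNCTION next_free_id() RETURNS integer AS $$
DECLARE
    candidate integer;
BEGIN
    LOOP
        candidate := nextval('items_id_seq');
        -- skip values still occupied from a previous cycle;
        -- note this loops forever if the table is completely full
        PERFORM 1 FROM items WHERE id = candidate;
        IF NOT FOUND THEN
            RETURN candidate;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;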

With regards,
Arnold


Re: Strange performance degregation in sql function (PG11.1)

2019-05-23 Thread Alastair McKinley
Hi Andrew,

Thanks for your in-depth response.  I found that adding the stable qualifier 
didn't solve the issue unfortunately.  I actually encountered the same issue 
(or at least extremely similar) today and made a bit more progress on defining 
it.

I have a carefully indexed and optimised query that runs in about 2 seconds 
with this structure and signature.

create or replace function example_function(param1 int, param2 int) returns 
setof custom_type as
$$
with a_few_ctes ()
select * from big_table where col1 = param1 and col2 = param2;
$$ language sql stable;

This function works as expected when executed from a psql client.  I am calling 
this function via another framework (Postgrest) that executes the function 
using the following pattern:

with args as (
select * from json_to_record($1) as (param1 int, param2 int)
),
output as (
select *
from example_function(
param1 := (select param1 from args),
param2 := (select param2 from args)
)
)
select * from output;

Running this query with the args coming from the CTE resulted in my query 
running for many tens of minutes before I gave up.  Changing the underlying 
function to use plpgsql fixes the issue and the query runs as expected.  Both 
versions work as expected when called directly, but the SQL version does not 
when called with the args coming from the CTE as shown above.

The new function signature is

create or replace function example_function(param1 int, param2 int) returns
setof custom_type as
$$
begin
    return query
    with a_few_ctes ()
    select * from big_table where col1 = param1 and col2 = param2;
end;
$$ language plpgsql stable;

I haven't been able to check the bad query plan with auto_explain as the query 
doesn't seem to finish.

So to summarise, changing a stable SQL function to a stable plpgsql function 
when called with function args from a CTE fixes a huge performance issue of 
uncertain origin.  I hope someone can offer some suggestions as this has really 
confused me.

Best regards,

Alastair

From: Andrew Gierth 
Sent: 19 May 2019 03:48
To: Alastair McKinley
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Strange performance degregation in sql function (PG11.1)

> "Alastair" == Alastair McKinley  writes:

 Alastair> Hi all,

 Alastair> I recently experienced a performance degradation in an
 Alastair> operational system that I can't explain. I had a function
 Alastair> wrapper for an aggregate query that was performing well using
 Alastair> the expected indexes with the approximate structure as shown
 Alastair> below.

 Alastair> create or replace function example_function(param1 int, param2 int)
 Alastair> returns setof custom_type as
 Alastair> $$
 Alastair> select * from big_table where col1 = param1 and col2 = param2;
 Alastair> $$ language sql;

This function isn't inlinable due to missing a STABLE qualifier; that's
a pretty big issue.

Without inlining, the function will be run only with generic plans,
which means that the decision about index usage will be made without
knowledge of the parameter values.

Was your actual function inlinable? See
https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions

You can get the query plan of a non-inlined function using the
auto_explain module (with its log_nested_statements option). The query
plan of non-inlined function calls is not otherwise shown by EXPLAIN.
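(An illustrative session setup for that; the settings shown are examples
only:)

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log every statement's plan
SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
SELECT * FROM example_function(1, 2);         -- its plan now appears in the server log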

 Alastair> After creating two new indexes on this table to support a
 Alastair> different use case during a migration, this unchanged
 Alastair> function reduced in performance by several orders of
 Alastair> magnitude. Running the query inside the function manually on
 Alastair> the console however worked as expected and the query plan did
 Alastair> not appear to have changed.

But when you run it manually, you'll get a custom plan, based on the
parameter values.

 Alastair> On a hunch I changed the structure of the function to the
 Alastair> structure below and immediately the query performance
 Alastair> returned to the expected baseline.

 Alastair> create or replace function example_function(param1 int, param2 int)
 Alastair> returns setof custom_type as
 Alastair> $$
 Alastair> BEGIN
 Alastair> return query execute format($query$
 Alastair> select * from big_table where col1 = %1$L and col2 = %2$L
 Alastair> $query$,param1,param2);
 Alastair> END;
 Alastair> $$ language plpgsql;

Using EXECUTE in plpgsql will get you a custom plan every time (though
you really should have used EXECUTE USING rather than interpolating the
parameters into the query string).
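(A sketch of the EXECUTE USING form, reusing the thread's placeholder names:)

create or replace function example_function(param1 int, param2 int)
returns setof custom_type as
$$
begin
    -- parameters are bound as values ($1, $2), never spliced into the string
    return query execute
        'select * from big_table where col1 = $1 and col2 = $2'
        using param1, param2;
end;
$$ language plpgsql stable;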

I suggest looking into the inlining question first.

--
Andrew (irc:RhodiumToad)


Re: Strange performance degregation in sql function (PG11.1)

2019-05-23 Thread Pavel Stehule
On Thu, 23 May 2019 at 23:38, Alastair McKinley <
a.mckin...@analyticsengines.com> wrote:

> [quoted message trimmed; see Alastair's post above]

SQL functions are fast when they are inlined, but when they are not inlined
they are significantly slower than plpgsql.

I am not sure; it has been a long time since I looked at this code. If I
remember correctly, SQL functions don't cache plans, so they build and
discard a plan on every call.



> [remainder of quoted thread trimmed; see Andrew Gierth's reply above]

Re: terminating walsender process due to replication timeout

2019-05-23 Thread Achilleas Mantzios

On 23/5/19 5:05 μ.μ., ayaho...@ibagroup.eu wrote:

[quoted message trimmed]

What is the best way to proceed? How to avoid this? Is there any additional
configuration which can help here?

I have set mine to 15min. No problems for over 7 months, knock on wood.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: terminating walsender process due to replication timeout

2019-05-23 Thread Kyotaro HORIGUCHI
Hello.

At Fri, 17 May 2019 11:04:58 +0300, ayaho...@ibagroup.eu wrote in 

> Can frequent database operations cause getting a standby server behind? Is 
> there a way to avoid this situation?
> I checked that walsender works well in my test  if I set 
> wal_sender_timeout at least to 5 second.

It depends on the transaction (WAL) traffic and the bandwidth of your
network. The performance difference between master and standby also
matters.

The possibilities I can guess are:

- The bandwidth is too narrow for the traffic.

- The standby performs more poorly than the master.

- Your network is having some sort of trouble. Virtual networks
  (local networks in a virtualized environment) tend to suffer
  trouble caused by CPU saturation or something else.
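(A quick check of standby lag, run on the master; the per-standby lag
columns exist on PostgreSQL 10 and later:)

SELECT application_name, write_lag, flush_lag, replay_lag
FROM pg_stat_replication;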

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center





POSTGRES_FSM_RELATIONS CRITICAL: DB control fsm relations used: 79569 of 80000 (99%)

2019-05-23 Thread Julie Nishimura
Hello,
We have an issue with fsm_relations utilization reaching 99%. I was able to
vacuum a handful of tables, but it wasn't enough to make a noticeable
difference. I think at this point we will need to increase the number of
fsm relations from 80,000 to 100,000, which will require a restart, because
there aren't any more dead rows to delete. I confirmed this by connecting to
each db and running the following query:

SELECT relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

daily_mail_shared_state_cddt_3588:

    relname     | n_live_tup | n_dead_tup
----------------+------------+------------
 article_errors |          0 |          0
 article_names  |    3375193 |          0
 indexdefs      |          0 |          0
 tabledefs      |          0 |          0

The output above shows n_dead_tup is zeroed out, which makes me believe we
need to increase the number of fsm relations to somewhere between 90k and
100k. But I might be wrong; I need your advice.
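(For reference, on an 8.2-era server these are the postgresql.conf settings
involved; the values are illustrative, not a recommendation:)

# postgresql.conf -- both require a server restart
max_fsm_relations = 100000
max_fsm_pages = 1600000    # must be at least 16 * max_fsm_relations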

PostgreSQL 8.2.15 (Greenplum Database 4.3.8.1 build 1)

Thanks!