Use of perl modules in plperl ?

2020-03-04 Thread stan
I am trying to examine the values of $_TD->{new}. I thought the easiest way
to see what this structure looked like was to use the Dumper functionality.
To do so I need to include the appropriate Perl module, which I would think
would be done like this:

use Data::Dumper qw(Dumper);

But inserting that into the script to create the function, like this:

CREATE OR REPLACE FUNCTION valid_id() RETURNS trigger AS $$
use Data::Dumper qw(Dumper);
if (($_TD->{new}{i} >= 100) || ($_TD->{new}{i} <= 0)) {
..

Returned this error message:

ERROR:  Unable to load Data/Dumper.pm into plperl at line 2.
BEGIN failed--compilation aborted at line 2.
CONTEXT:  compilation of PL/Perl function "valid_id"

What do I have to do to use Perl modules in a plperl function?

BTW this works outside plperl

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper qw(Dumper);

..

Thanks for the help.

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Use of perl modules in plperl ?

2020-03-04 Thread hubert depesz lubaczewski
On Wed, Mar 04, 2020 at 06:03:22AM -0500, stan wrote:
> I am trying to examine the values of $_TD->{new}. I thought the easiest way
> to see what this structure looked like was to use the Dumper functionality.
> To do so I need to include the appropriate Perl module, which I would think
> would be done like this:
> use Data::Dumper qw(Dumper);

plperl is trusted, and you can't use "use" in it.

More on this:
https://www.postgresql.org/docs/current/plperl-trusted.html

You have two options:
1. Switch to pl/PerlU
2. set plperl.on_init to whatever "use" you need, examples:
   
https://www.postgresql.org/docs/current/plperl-under-the-hood.html#PLPERL-CONFIG
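
For example, a minimal sketch of option 2 (this can only be set in
postgresql.conf or on the server command line, and only affects newly
started interpreters; whether a given module's functions are then reachable
from trusted code depends on the module -- see the linked docs):

    plperl.on_init = 'use Data::Dumper;'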

Best regards,

depesz





Re: Use of perl modules in plperl ?

2020-03-04 Thread stan
On Wed, Mar 04, 2020 at 06:03:22AM -0500, stan wrote:
> I am trying to examine the values of $_TD->{new}. I thought the easiest way
> to see what this structure looked like was to use the Dumper functionality.
> To do so I need to include the appropriate Perl module, which I would think
> would be done like this:
> 
> use Data::Dumper qw(Dumper);
> 
> But inserting that into the script to create the function, like this:
> 
> CREATE OR REPLACE FUNCTION valid_id() RETURNS trigger AS $$
> use Data::Dumper qw(Dumper);
> if (($_TD->{new}{i} >= 100) || ($_TD->{new}{i} <= 0)) {
>   ..
> 
> Returned this error message:
> 
> ERROR:  Unable to load Data/Dumper.pm into plperl at line 2.
> BEGIN failed--compilation aborted at line 2.
> CONTEXT:  compilation of PL/Perl function "valid_id"
> 
> What do I have to do to use Perl modules in a plperl function?
> 
> BTW this works outside plperl
> 
> #!/usr/bin/perl
> use strict;
> use warnings;
> use Data::Dumper qw(Dumper);
> 
> ..
> 
> Thanks for the help.
> 

Found plperlu in the docs. 

But some syntax help on referencing all the values returned by $_TD->{new}
would be helpful.
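
For reference, a minimal sketch of one way to walk the row hash in a plperlu
trigger function (the function name here is hypothetical):

CREATE OR REPLACE FUNCTION show_new() RETURNS trigger AS $$
    use Data::Dumper qw(Dumper);
    # $_TD->{new} is a hash reference keyed by column name
    foreach my $col (sort keys %{ $_TD->{new} }) {
        my $val = defined $_TD->{new}{$col} ? $_TD->{new}{$col} : 'NULL';
        elog(NOTICE, "$col = $val");
    }
    elog(NOTICE, Dumper($_TD->{new}));
    return;
$$ LANGUAGE plperlu;
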
-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Calling a function from a rule?

2020-03-04 Thread stan
I am missing something about how to properly create a rule.

Thanks to the helpful folks on this list, I am looking at creating some
updatable views. So it looks like I need to create a rule and a function
for this. Here is what I am trying as a test.



DROP TRIGGER v_trig_test ON test;

CREATE OR REPLACE FUNCTION v_trig_test() RETURNS trigger AS $$
use 5.010;
use strict;
use warnings;
use Data::Dumper qw(Dumper);

my $new =  Dumper $_TD->{new};
my $old =  Dumper $_TD->{old};

elog(NOTICE, "old = $old" );
elog(NOTICE, "new = $new" );

return;

$$ LANGUAGE plperlu;


CREATE RULE "_RETURN" AS
ON UPDATE TO purchase_view
DO INSTEAD
SELECT * FROM v_trig_test;


But the SELECT line in the CREATE RULE seems to produce a syntax error.

Here is my thinking. I have done functions called from triggers, and I am
modeling this after that experience, so I think I need the rule to call the
function, correct?
-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Calling a function from a rule?

2020-03-04 Thread stan
On Wed, Mar 04, 2020 at 08:05:06AM -0500, stan wrote:
> I am missing something about how to properly create a rule.
> 
> Thanks to the helpful folks on this list, I am looking at creating some
> updatable views. So it looks like I need to create a rule and a function
> for this. Here is what I am trying as a test.
> 
> 
> 
> DROP TRIGGER v_trig_test ON test;
> 
> CREATE OR REPLACE FUNCTION v_trig_test() RETURNS trigger AS $$
> use 5.010;
> use strict;
> use warnings;
> use Data::Dumper qw(Dumper);
> 
> my $new =  Dumper $_TD->{new};
> my $old =  Dumper $_TD->{old};
> 
> elog(NOTICE, "old = $old" );
> elog(NOTICE, "new = $new" );
> 
> return;
> 
> $$ LANGUAGE plperlu;
> 
> 
> CREATE RULE "_RETURN" AS
> ON UPDATE TO purchase_view
> DO INSTEAD
> SELECT * FROM v_trig_test;
> 
> 
> But the SELECT line in the CREATE RULE seems to produce a syntax error.
> 
> Here is my thinking. I have done functions called from triggers, and I am
> modeling this after that experience, so I think I need the rule to call the
> function, correct?

Looks like the issue has to do with defining the return type of the
function. I corrected the SELECT in the rule to:

SELECT * FROM v_trig_test()

Which got me as far as a complaint about the return type of the function,
so now the parser understands that I am trying to call a function. What
should my function return?

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




pg_dump and public schema

2020-03-04 Thread Олег Самойлов
Hi all. PostgreSQL 12.1. Strange behaviour with pg_dump and an absent public
schema.

I dropped the public schema and I work under a "username" schema.

=> \dn
List of schemas
 Name  | Owner
---+---
 olleg | olleg
(1 row)

Dump now

pg_dump -U postgres -C olleg >dump.sql

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.1
-- Dumped by pg_dump version 12.1

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: olleg; Type: DATABASE; Schema: -; Owner: olleg
--

CREATE DATABASE olleg WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 
'ru_RU.UTF-8' LC_CTYPE = 'ru_RU.UTF-8';


ALTER DATABASE olleg OWNER TO olleg;

\connect olleg

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: olleg; Type: SCHEMA; Schema: -; Owner: olleg
--

CREATE SCHEMA olleg;


ALTER SCHEMA olleg OWNER TO olleg;

--
-- PostgreSQL database dump complete
--

Recreate the DB from the dump:

psql postgres postgres -f dump.sql

And now I see a public schema, which should be absent.

psql olleg olleg

=> \dn
  List of schemas
  Name  |  Owner
+--
 olleg  | olleg
 public | postgres
(2 rows)



Re: pg_dump and public schema

2020-03-04 Thread Paul Foerster
Hi,

I think "create database" always creates the "public" schema. So, all
is well. All you have to do is drop it after running the dump.sql
script.
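
For example (a sketch), right after loading the dump and while the schema is
still empty:

    psql olleg olleg -c 'DROP SCHEMA public;'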

Cheers,
Paul

On Wed, Mar 4, 2020 at 2:43 PM Олег Самойлов  wrote:
>
> Hi all. PostgreSQL 12.1. Strange behaviour with pg_dump and an absent public
> schema.
>
> I dropped the public schema and I work under a "username" schema.
>
> => \dn
> List of schemas
>  Name  | Owner
> ---+---
>  olleg | olleg
> (1 row)
>
> Dump now
>
> pg_dump -U postgres -C olleg >dump.sql
>
> --
> -- PostgreSQL database dump
> --
>
> -- Dumped from database version 12.1
> -- Dumped by pg_dump version 12.1
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> --
> -- Name: olleg; Type: DATABASE; Schema: -; Owner: olleg
> --
>
> CREATE DATABASE olleg WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE 
> = 'ru_RU.UTF-8' LC_CTYPE = 'ru_RU.UTF-8';
>
>
> ALTER DATABASE olleg OWNER TO olleg;
>
> \connect olleg
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> --
> -- Name: olleg; Type: SCHEMA; Schema: -; Owner: olleg
> --
>
> CREATE SCHEMA olleg;
>
>
> ALTER SCHEMA olleg OWNER TO olleg;
>
> --
> -- PostgreSQL database dump complete
> --
>
> recreate DB from the dump:
>
> psql postgres postgres -f dump.sql
>
> And now I see public schema, which must be absent.
>
> psql olleg olleg
>
> => \dn
>   List of schemas
>   Name  |  Owner
> +--
>  olleg  | olleg
>  public | postgres
> (2 rows)
>




Re: Exportacion por lotes

2020-03-04 Thread Hernan Jesus Gonzalez Carmona
Thank you all very much.

On Tue, Mar 3, 2020 at 21:58, Alvaro Herrera ()
wrote:

> On 2020-Mar-03, Hernan Jesus Gonzalez Carmona wrote:
>
> > Dear all, first of all let me introduce myself; my name is Hernan Gonzalez. I
> > have just subscribed to this mailing list, and I apologize in advance if in
> > this message I violate any rule of the mailing list, but I need help urgently.
>
> I replied in Spanish copying pgsql-es-ayuda.
>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: Calling a function from a rule?

2020-03-04 Thread Vik Fearing
On 04/03/2020 14:05, stan wrote:
> I am missing something about how to properly create a rule.

What you are missing is that you should not ever use RULEs.

> Thanks to the helpful folks on this list, I am looking a creating some
> update able views. So, looks like I need to create a rule and a function
> for this.

Use an INSTEAD OF trigger on the view unless it's a simple view in which
case it "just works".
-- 
Vik Fearing




Re: Calling a function from a rule?

2020-03-04 Thread stan
On Wed, Mar 04, 2020 at 03:04:34PM +0100, Vik Fearing wrote:
> On 04/03/2020 14:05, stan wrote:
> > I am missing something about how to properly create a rule.
> 
> What you are missing is that you should not ever use RULEs.
> 
> > Thanks to the helpful folks on this list, I am looking a creating some
> > update able views. So, looks like I need to create a rule and a function
> > for this.
> 
> Use an INSTEAD OF trigger on the view unless it's a simple view in which
> case it "just works".

OK, I certainly believe you have more knowledge than I do here. Just let me
make certain that what I am trying to do fits that model.

I am trying to create an "update-able view" on a view that dereferences a
number of keys to foreign tables to make the view more "human friendly", so
it is going to need to look up those keys before allowing the insert into
the base table(s).

Can you please explain the difference between triggers and rules? 

I have some experience with using triggers in other situations, but the
documentation for update-able views seemed to point me to rules.

Thanks.

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: pg_dump and public schema

2020-03-04 Thread Adrian Klaver

On 3/4/20 5:42 AM, Олег Самойлов wrote:

Hi all. PostgreSQL 12.1. Strange behaviour with pg_dump and an absent public
schema.

I dropped the public schema and I work under a "username" schema.

=> \dn
List of schemas
  Name  | Owner
---+---
  olleg | olleg
(1 row)

Dump now

pg_dump -U postgres -C olleg >dump.sql

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.1
-- Dumped by pg_dump version 12.1

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: olleg; Type: DATABASE; Schema: -; Owner: olleg
--

CREATE DATABASE olleg WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 
'ru_RU.UTF-8' LC_CTYPE = 'ru_RU.UTF-8';


ALTER DATABASE olleg OWNER TO olleg;

\connect olleg

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: olleg; Type: SCHEMA; Schema: -; Owner: olleg
--

CREATE SCHEMA olleg;


ALTER SCHEMA olleg OWNER TO olleg;

--
-- PostgreSQL database dump complete
--

recreate DB from the dump:

psql postgres postgres -f dump.sql

And now I see public schema, which must be absent.

psql olleg olleg

=> \dn
   List of schemas
   Name  |  Owner
+--
  olleg  | olleg
  public | postgres
(2 rows)



I believe this is the latest information on public schema handling:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5955d934194c3888f30318209ade71b53d29777f

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Logical replication lag in seconds

2020-03-04 Thread Klaus Darilion
For the record: with a simple script I hacked together a solution which runs
purely on the server.

1. Create a table to track the timestamp of an lsn:

CREATE TABLE lsn2date(
   lsn pg_lsn PRIMARY KEY,
   seen timestamp NOT NULL DEFAULT NOW()
);
CREATE ROLE replication_lag_user WITH LOGIN PASSWORD 'xxx';
GRANT ALL ON TABLE lsn2date TO replication_lag_user;


2. Create a script which populates the table:

# cat /etc/systemd/system/calculate_logical_replication_lag.service
[Unit]
Description=Start and auto restart service

[Install]
WantedBy=multi-user.target

[Service]
ExecStart=/usr/bin/php /path/to/calculate_logical_replication_lag.php
Restart=always
RestartSec=10


# cat calculate_logical_replication_lag.php
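
A rough SQL sketch of how such a table can be used to estimate lag in seconds
(a guess at the mechanics; the actual script may differ):

-- on the publisher, run periodically (e.g. every second) by the script:
INSERT INTO lsn2date (lsn) VALUES (pg_current_wal_lsn());

-- lag per subscriber, by looking up when the replayed position was current:
SELECT r.application_name,
       now() - (SELECT max(l.seen) FROM lsn2date l WHERE l.lsn <= r.replay_lsn) AS lag
FROM pg_stat_replication r;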


Suggestion to reduce COPY command output to csv file

2020-03-04 Thread postggen2020 s
Hi Team,
Thanks a lot to all of you for providing support to people like me.
Could you please provide a suggestion on the COPY command.

Environment:
DB Version:9.5.15
postgis:
Table contain GIS data.

I have a table with GIS data; it is around 300 MB and has 200,000+ (2 lakh+)
records. I want to export all the records to a CSV file. I am able to export
the table data into a CSV file, but the exported CSV file is around 162 MB.
When I try to open it in Excel it takes a long time, and Excel hangs while
applying a VLOOKUP.

I am using the command below:
\copy cities TO '/var/lib/pgsql/cities.csv' (DELIMITER ',', FORMAT 'csv',
HEADER)

I am trying to achieve the following:
1. The exported CSV file should be within 10 MB.
2. The Excel file should open quickly and allow VLOOKUPs without any
problems (like hanging).

Could you please provide suggestions on the questions below:
1. Is what I am trying to achieve a reasonable goal?
2. Can this command be used for the above use case?
3. Is there any tool/extension available for the above use case?

Your inputs are highly appreciated.

Regards,
Postgadm.


Re: Suggestion to reduce COPY command output to csv file

2020-03-04 Thread Adrian Klaver

On 3/4/20 10:38 AM, postggen2020 s wrote:

Hi Team,
Thanks a lot all of you, for providing support peoples like me.
Could you please provide a suggestion on COPY command.

Environment:
DB Version:9.5.15
postgis:
Table contain GIS data.
I have a table with GIS data, its around 300MB and 2Lacks+ records. I 
want to export all the records to the CSV file. I can able to export the 
table data into a CSV file, but the exported CSV file size around 162MB.
While I am trying to opening the excel it is taking a long time and also 
while applying vlookup excel is getting hang.

I am using below command :
\copy cities TO '/var/lib/pgsql/cities.csv' (DELIMITER ',', FORMAT 
'csv', HEADER)

I am trying to achieve the following :
1. The exported CSV file should have within 10MB.


You can select less than the entire table with a query:

https://www.postgresql.org/docs/9.5/sql-copy.html

query

A SELECT or VALUES command whose results are to be copied. Note 
that parentheses are required around the query.
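
For example (a sketch; the column list and WHERE clause are hypothetical):

\copy (SELECT id, name FROM cities WHERE population > 100000) TO '/var/lib/pgsql/cities_subset.csv' (FORMAT 'csv', HEADER)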



2. The excel file should open quickly and able to perform vlookups 
without any problems(like hang).


That is something you will need to take up with MS.


Could you please provide suggestions on below queries:
1. what I am trying to achieve, is the right assumption?.
2. Can we use this command to use for the above use case?.
3. Is there any tool/extension available for the above use case.?


The MS Office data tools can pull directly from the database into a
spreadsheet via ODBC.


Do you use a programming language?
If so, use its libraries to pull the data out into a CSV file or directly into
a spreadsheet.




Your inputs are highly appreciated.
Regards,
Postgadm.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgres on macOS 10

2020-03-04 Thread John DeSoi


> On Mar 3, 2020, at 5:06 AM, Nick Renders  wrote:
> 
> In the system.log we see the following 2 lines over and over again, until a 
> user has logged in:
> 
>   Mar  3 09:37:19 postgrestest com.apple.xpc.launchd[1] 
> (com.edb.launchd.postgresql-12[319]): Service exited with abnormal code: 2
>   Mar  3 09:37:19 postgrestest com.apple.xpc.launchd[1] 
> (com.edb.launchd.postgresql-12): Service only ran for 0 seconds. Pushing 
> respawn out by 10 seconds.

To understand the issue, I think you need to look at the Postgres log and see 
if it gives a clue about why it did not start. The startup log might be 
different from the normal Postgres logs depending on how the launch daemon sets 
it up. I assume you will find the launch daemon definition at /Library/LaunchDaemons/.

John DeSoi, Ph.D.





Re: Suggestion to reduce COPY command output to csv file

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 11:39 AM postggen2020 s 
wrote:

> Hi Team,
> Thanks a lot all of you, for providing support peoples like me.
> Could you please provide a suggestion on COPY command.
>
> Environment:
> DB Version:9.5.15
> postgis:
> Table contain GIS data.
>
> I have a table with GIS data, its around 300MB and 2Lacks+ records. I want
> to export all the records to the CSV file. I can able to export the table
> data into a CSV file, but the exported CSV file size around 162MB.
> While I am trying to opening the excel it is taking a long time and also
> while applying vlookup excel is getting hang.
>
> I am using below command :
> \copy cities TO '/var/lib/pgsql/cities.csv' (DELIMITER ',', FORMAT 'csv',
> HEADER)
>
> I am trying to achieve the following :
> 1. The exported CSV file should have within 10MB.
>

Then you probably need to redefine your problem and solution.


> 2. The excel file should open quickly and able to perform vlookups without
> any problems(like hang).
>

See #1


>
> Could you please provide suggestions on below queries:
> 1. what I am trying to achieve, is the right assumption?.
>

Not really...


> 2. Can we use this command to use for the above use case?.
>

Not without changing your approach to the problem...in which case \copy may
or may not be a useful tool

3. Is there any tool/extension available for the above use case.?
>

Not sure about an additive suggestion, but you can probably get considerable
mileage by removing Excel from the equation, bringing the external data
into PostgreSQL, and using joins (i.e., SQL's version of VLOOKUP).
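
A rough sketch of that idea (the lookup table and column names are
hypothetical):

-- load the spreadsheet's lookup data into a table...
CREATE TABLE lookup_data (city_id integer PRIMARY KEY, extra_info text);
\copy lookup_data FROM '/path/to/lookup.csv' (FORMAT 'csv', HEADER)

-- ...then join instead of running VLOOKUP in Excel
SELECT c.*, l.extra_info
FROM cities AS c
LEFT JOIN lookup_data AS l ON l.city_id = c.id;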

David J.


Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
We have many databases of the same type separated for data governance
reasons. They, however, share the same web front-end code.

Presently, replacing functions and performing data updates on the
databases in series often executes across all databases in less than a
minute. (The updates are currently done with simple sql files connecting
to each database and then loading a stub file pointing to each function
to drop and reload, and running the data update queries.)

However, for larger updates, the time when the front end code is
out-of-step with the database can cause end-user problems.

Unfortunately our schema arrangement isn't clean enough to swap out
function schemas in a transaction to sort out that part of the problem
(if in fact that would work anyway).

One solution might be to do the updates in parallel. Another thought
would be to somehow execute the code update from a text field in a table
in each database triggered with pg_cron.

Bearing in mind the possible problems of connection saturation or
massive IO spikes, I'd be grateful to learn of any thoughts on how to
negotiate this problem.

Rory




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Adrian Klaver

On 3/4/20 2:04 PM, Rory Campbell-Lange wrote:

We have many databases of the same type separated for data governance
reasons. They, however, share the same web front-end code.

Presently, replacing functions and performing data updates on the
databases in series often executes across all databases in less than a
minute. (The updates are currently done with simple sql files connecting
to each database and then loading a stub file pointing to each function
to drop and reload, and running the data update queries.)

However, for larger updates, the time when the front end code is
out-of-step with the database can cause end-user problems.


So the issue is synchronization between the code in the database and the 
code outside the database?


I'm assuming the problems are changes in function signatures and return 
values?




Unfortunately our schema arrangement isn't clean enough to swap out
function schemas in a transaction to sort out that part of the problem
(if in fact that would work anyway).

One solution might be to do the updates in parallel. Another thought
would be to somehow execute the code update from a text field in a table
in each database triggered with pg_cron.

Bearing in mind the possible problems of connection saturation or
massive IO spikes, I'd be grateful to learn of any thoughts on how to
negotiate this problem.

Rory





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:
> On 3/4/20 2:04 PM, Rory Campbell-Lange wrote:
> > We have many databases of the same type separated for data governance
> > reasons. They, however, share the same web front-end code.
> > 
> > Presently, replacing functions and performing data updates on the
> > databases in series often executes across all databases in less than a
> > minute. (The updates are currently done with simple sql files connecting
> > to each database and then loading a stub file pointing to each function
> > to drop and reload, and running the data update queries.)
> > 
> > However, for larger updates, the time when the front end code is
> > out-of-step with the database can cause end-user problems.
> 
> So the issue is synchronization between the code in the database and the
> code outside the database?
> 
> I'm assuming the problems are changes in function signatures and return
> values?

That is one problem; sometimes we also need to make some table
definition or data changes.

> > Unfortunately our schema arrangement isn't clean enough to swap out
> > function schemas in a transaction to sort out that part of the problem
> > (if in fact that would work anyway).
> > 
> > One solution might be to do the updates in parallel. Another thought
> > would be to somehow execute the code update from a text field in a table
> > in each database triggered with pg_cron.
> > 
> > Bearing in mind the possible problems of connection saturation or
> > massive IO spikes, I'd be grateful to learn of any thoughts on how to
> > negotiate this problem.




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Adrian Klaver

On 3/4/20 2:22 PM, Rory Campbell-Lange wrote:

On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:

On 3/4/20 2:04 PM, Rory Campbell-Lange wrote:

We have many databases of the same type separated for data governance
reasons. They, however, share the same web front-end code.

Presently, replacing functions and performing data updates on the
databases in series often executes across all databases in less than a
minute. (The updates are currently done with simple sql files connecting
to each database and then loading a stub file pointing to each function
to drop and reload, and running the data update queries.)

However, for larger updates, the time when the front end code is
out-of-step with the database can cause end-user problems.


So the issue is synchronization between the code in the database and the
code outside the database?

I'm assuming the problems are changes in function signatures and return
values?


That is one problem; sometimes we also need to make some table
definition or data changes.



Alright, but the general issue is that the world as seen by the database 
can be different from that seen by the front end code.


So the solution is to make those world views sync, or am I missing 
something?



Unfortunately our schema arrangement isn't clean enough to swap out
function schemas in a transaction to sort out that part of the problem
(if in fact that would work anyway).

One solution might be to do the updates in parallel. Another thought
would be to somehow execute the code update from a text field in a table
in each database triggered with pg_cron.

Bearing in mind the possible problems of connection saturation or
massive IO spikes, I'd be grateful to learn of any thoughts on how to
negotiate this problem.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 3:04 PM Rory Campbell-Lange 
wrote:

> However, for larger updates, the time when the front end code is
> out-of-step with the database can cause end-user problems.
>

You should try very hard to structure your database migrations so that,
instead of going directly from one uniquely valid state to another, you
transition through a dually valid state (i.e., don't break the old way of
doing things while adding the pieces that make the new way work) and then
separately remove the old-state components once you know all of the
software upgrades have been deployed.

Add New Stuff, Leave Old Stuff Alone
Remove Old Stuff

Triggers can be installed during the transition period to facilitate the
duplication of data that will result.

This should be considerably easier for non-data-impacting updates, as you
can just choose different names for the new stuff and then remove the old
stuff separately.
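
A trivial sketch of that two-step pattern (the function names and bodies are
hypothetical placeholders):

-- commit 1: add the new thing under a new name; the old function is untouched
CREATE FUNCTION order_total_v2(order_id integer, include_tax boolean)
    RETURNS numeric LANGUAGE sql AS $$ SELECT 0::numeric $$;  -- placeholder body

-- commit 2, only after every front end has been upgraded:
DROP FUNCTION order_total(integer);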

David J.


Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:
> On 3/4/20 2:22 PM, Rory Campbell-Lange wrote:
> > On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:
> > > On 3/4/20 2:04 PM, Rory Campbell-Lange wrote:
> > > > We have many databases of the same type separated for data governance
> > > > reasons. They, however, share the same web front-end code.
> > > > 
> > > > Presently, replacing functions and performing data updates on the
> > > > databases in series often executes across all databases in less than a
> > > > minute. (The updates are currently done with simple sql files connecting
> > > > to each database and then loading a stub file pointing to each function
> > > > to drop and reload, and running the data update queries.)
> > > > 
> > > > However, for larger updates, the time when the front end code is
> > > > out-of-step with the database can cause end-user problems.
> > > 
> > > So the issue is synchronization between the code in the database and the
> > > code outside the database?
> > > 
> > > I'm assuming the problems are changes in function signatures and return
> > > values?
> > 
> > That is one problem; sometimes we also need to make some table
> > definition or data changes.
> 
> Alright, but the general issue is that the world as seen by the database can
> be different from that seen by the front end code.
> 
> So the solution is to make those world views sync, or am I missing
> something?

Essentially we wish to reduce the window where the frontend and backend
aren't synchronised.

If we have (for example) 200 databases which each take 2 seconds to
update, a client could be on the wrong frontend code for over 6 minutes.

> > > > Unfortunately our schema arrangement isn't clean enough to swap out
> > > > function schemas in a transaction to sort out that part of the problem
> > > > (if in fact that would work anyway).
> > > > 
> > > > One solution might be to do the updates in parallel. Another thought
> > > > would be to somehow execute the code update from a text field in a table
> > > > in each database triggered with pg_cron.
> > > > 
> > > > Bearing in mind the possible problems of connection saturation or
> > > > massive IO spikes, I'd be grateful to learn of any thoughts on how to
> > > > negotiate this problem.




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Ron

On 3/4/20 4:33 PM, Rory Campbell-Lange wrote:

On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:

On 3/4/20 2:22 PM, Rory Campbell-Lange wrote:

On 04/03/20, Adrian Klaver (adrian.kla...@aklaver.com) wrote:

On 3/4/20 2:04 PM, Rory Campbell-Lange wrote:

We have many databases of the same type separated for data governance
reasons. They, however, share the same web front-end code.

Presently, replacing functions and performing data updates on the
databases in series often executes across all databases in less than a
minute. (The updates are currently done with simple sql files connecting
to each database and then loading a stub file pointing to each function
to drop and reload, and running the data update queries.)

However, for larger updates, the time when the front end code is
out-of-step with the database can cause end-user problems.

So the issue is synchronization between the code in the database and the
code outside the database?

I'm assuming the problems are changes in function signatures and return
values?

That is one problem; sometimes we also need to make some table
definition or data changes.

Alright, but the general issue is that the world as seen by the database can
be different from that seen by the front end code.

So the solution is to make those world views sync, or am I missing
something?

Essentially we wish to reduce the window where the frontend and backend
aren't synchronised.

If we have (for example) 200 databases which each take 2 seconds to
update, a client could be on the wrong frontend code for over 6 minutes.


The only solution to that is parallel updates (not all 200 at once!!) with a 
progress bar.



Unfortunately our schema arrangement isn't clean enough to swap out
function schemas in a transaction to sort out that part of the problem
(if in fact that would work anyway).

One solution might be to do the updates in parallel. Another thought
would be to somehow execute the code update from a text field in a table
in each database triggered with pg_cron.

Bearing in mind the possible problems of connection saturation or
massive IO spikes, I'd be grateful to learn of any thoughts on how to
negotiate this problem.




--
Angular momentum makes the world go 'round.




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Guyren Howe
On Mar 4, 2020, at 14:33 , Rory Campbell-Lange  wrote:
> 
> Essentially we wish to reduce the window where the frontend and backend
> aren't synchronised.
> 
> If we have (for example) 200 databases which each take 2 seconds to
> update, a client could be on the wrong frontend code for over 6 minutes.

Send each of the servers a PL/PGSQL method that executes all the things in a 
transaction and then waits until the same clock time to commit. Then all the 
servers are committing at the same moment. They will still be out of synch 
somewhat, but this would reduce the degree.
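
A rough sketch of that, assuming the driving script runs this in each database
(the target timestamp and the update statements are placeholders):

BEGIN;

-- ... replace functions / apply data updates here ...

DO $$
DECLARE
    target timestamptz := '2020-03-05 02:00:00+00';  -- the agreed commit time
BEGIN
    -- sleep until the agreed wall-clock time (no-op if it is already past)
    PERFORM pg_sleep(
        GREATEST(0, extract(epoch FROM (target - clock_timestamp())))
    );
END
$$;

COMMIT;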


Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
On 04/03/20, Guyren Howe (guy...@gmail.com) wrote:
> On Mar 4, 2020, at 14:33 , Rory Campbell-Lange  
> wrote:
> > 
> > Essentially we wish to reduce the window where the frontend and backend
> > aren't synchronised.
> > 
> > If we have (for example) 200 databases which each take 2 seconds to
> > update, a client could be on the wrong frontend code for over 6 minutes.
> 
> Send each of the servers a PL/PGSQL method that executes all the
> things in a transaction and then waits until the same clock time to
> commit. Then all the servers are committing at the same moment. They
> will still be out of synch somewhat, but this would reduce the degree.

This is a really interesting idea. 

Any thoughts on how to wrap pl/pgsql function dropping and recreation code
within a wrapper pl/pgsql function? 




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 3:48 PM Rory Campbell-Lange 
wrote:

> Any thoughts on how to wrap pl/pgsql function dropping and recreation code
> within a wrapper pl/pgsql function?


Not endorsing this but dynamic SQL works just fine (though can get hard to
read).  Use format() and EXECUTE ... USING liberally.

CREATE FUNCTION perform_update()...
AS $outer$
BEGIN

drop_sql := $inner$ DROP FUNCTION ...; $inner$
EXECUTE drop_sql;

END;
$outer$;

David J.


Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 3:55 PM David G. Johnston 
wrote:

> On Wed, Mar 4, 2020 at 3:48 PM Rory Campbell-Lange <
> r...@campbell-lange.net> wrote:
>
>> Any thoughts on how to wrap pl/pgsql function dropping and recreation code
>> within a wrapper pl/pgsql function?
>
>
> Not endorsing this but dynamic SQL works just fine (though can get hard to
> read).  Use format() and EXECUTE ... USING liberally.
>
>
Or, more readable depending upon your trust level:

INSERT INTO dynamic_codes VALUES (1, 'DROP FUNCTION ...');

CREATE FUNCTION execute_dynamic(code_id int)
AS $$
sql_cmd := (SELECT val FROM dynamic_codes WHERE id = code_id);
EXECUTE sql_cmd;
$$;

SELECT execute_dynamic(1);

David J.


Determining the type of an object in plperl

2020-03-04 Thread stan
Probably a bit off topic, but I suspect someone on this list knows how to do
this.

I am in the process of writing a plperl function. In this function I need
to compare the data in the NEW versus OLD structures. I am writing this as a
generic subroutine, so I am looping through and comparing the two to see what
is different. The problem is that I need to know whether to use a != or a ne
comparison.

How can I determine what the data type of the value element is?


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Adrian Klaver

On 3/4/20 2:42 PM, Guyren Howe wrote:
On Mar 4, 2020, at 14:33 , Rory Campbell-Lange > wrote:


Essentially we wish to reduce the window where the frontend and backend
aren't synchronised.

If we have (for example) 200 databases which each take 2 seconds to
update, a client could be on the wrong frontend code for over 6 minutes.


Send each of the servers a PL/PGSQL method that executes all the things 
in a transaction and then waits until the same clock time to commit. 


How do you know what the clock time will be?

Then all the servers are committing at the same moment. They will still 
be out of synch somewhat, but this would reduce the degree.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
On 04/03/20, David G. Johnston (david.g.johns...@gmail.com) wrote:
> On Wed, Mar 4, 2020 at 3:55 PM David G. Johnston 
> wrote:
> 
> > On Wed, Mar 4, 2020 at 3:48 PM Rory Campbell-Lange <
> > r...@campbell-lange.net> wrote:
> >
> >> Any thoughts on how to wrap pl/pgsql function dropping and recreation code
> >> within a wrapper pl/pgsql function?
> >
> >
> > Not endorsing this but dynamic SQL works just fine (though can get hard to
> > read).  Use format() and EXECUTE ... USING liberally.
> >
> >
> Or, more readable depending upon your trust level:
> 
> INSERT INTO dynamic_codes VALUES (1, 'DROP FUNCTION ...');
> 
> CREATE FUNCTION execute_dynamic(code_id int)
> AS $$
> sql_cmd := (SELECT val FROM dynamic_codes WHERE id = code_id);
> EXECUTE sql_cmd;
> $$;
> 
> SELECT execute_dynamic(1);

Thanks very much for the useful examples.

Based on your second example, we could drop and then reload an upgrade
schema with entries in dynamic_codes, then use execute_dynamic(...) as
you suggest.

Any idea on how to run execute_dynamic across many databases at roughly
the same time?

I'm just wondering if Guyren Howe's idea of having many transactions
open waiting for a clock time to commit is in fact feasible due to
(presumably) having to have all the connections open to every database
from the client until the transactions complete.





Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 4:41 PM Rory Campbell-Lange 
wrote:

> Any idea on how to run execute_dynamic across many databases at roughly
> the same time?
>
> I'm just wondering if Guyren Howe's idea of having many transactions
> open waiting for a clock time to commit is in fact feasible due to
> (presumably) having to have all the connections open to every database
> from the client until the transactions complete.
>

Clock time synchronization is possible, so it's largely a matter of resources
at that point.  If your servers are on machines where you can get a shell,
having each server run psql against its own databases should be sufficient.

I'll go back to my earlier comment, on a separate line of thought, which
may have been missed, in that having two commits involved here is probably
a better option.  The first commit is set up to allow both the old and new
software to continue working normally.  The second commit then removes the
functionality the older software versions are using - after they've been
phased out.

David J.


Re: Determining the type of an object in plperl

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 4:21 PM stan  wrote:

> Probably a bit off topic, but I suspect someone on this list knows how to
> do
> this.
>
> I am in the process of writing a plperl function. In this function I need
> to compare the data in the NEW versus OLD structures. I am writing this as
> a
> generic subroutine, so I am looping through and comparing the two to see what
> is different. The problem is that I need to know whether to use a != or a ne
> comparison.
>
> how can I determine what the data type of the value element is?
>

Not up to speed on Perl but you basically want everything to be done using
string equality - can't you just use "ne" everywhere and not worry about
comparing numbers using string comparison logic?  Might want to disable
warnings...

That would have to be faster than executing a type_of function on every
single column.
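
For example, a rough sketch of that ne-everywhere approach inside the PL/Perl
trigger body (purely illustrative):

    foreach my $col (keys %{ $_TD->{new} }) {
        # NULLs arrive as undef; this treats NULL and empty string the same
        my $new = defined $_TD->{new}{$col} ? $_TD->{new}{$col} : '';
        my $old = defined $_TD->{old}{$col} ? $_TD->{old}{$col} : '';
        elog(NOTICE, "column $col differs") if $new ne $old;
    }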

Then measure performance and decide whether a generic routine is performant
enough.  If not, you might try creating a custom function dynamically using
the catalogs as input.

David J.


Re: Advice request : simultaneous function/data updates on many databases

2020-03-04 Thread Rory Campbell-Lange
On 04/03/20, David G. Johnston (david.g.johns...@gmail.com) wrote:
> On Wed, Mar 4, 2020 at 4:41 PM Rory Campbell-Lange 
> wrote:
> 
> > Any idea on how to run execute_dynamic across many databases at roughly
> > the same time?
> >
> > I'm just wondering if Guyren Howe's idea of having many transactions
> > open waiting for a clock time to commit is in fact feasible due to
> > (presumably) having to have all the connections open to every database
> > from the client until the transactions complete.
> >
> 
> Clock time synchronization is possible so its largely a matter of resources
> at that point.  If your servers are on machines where you can get shell
> having the server run psql on its own databases should provide sufficient.

Yes, that is how we do it at present. We'll have to do some tests.

> I'll go back to my earlier comment, on a separate line of thought, which
> may have been missed, in that having two commits involved here is probably
> a better option.  First commit is setup to allow both the old and new
> software to continue working normally.  The second commit then removes the
> functionality the older software versions are using - after they've been
> phased out.

I did miss that point; thanks for reiterating it.

I think the issue we will have with old/new coexistence is that we would
sometimes hit the "cannot find best candidate" function signature
problem, as we often extend existing function arguments with new
arguments with defaults.

But it is certainly something worth testing.

Thanks a lot for the pointers.





Re: Determining the type of an object in plperl

2020-03-04 Thread Ravi Krishna





how can I determine what the data type of the value element is?


Perl has a ref function which can tell you what type of object a reference points to.

https://perldoc.perl.org/functions/ref.html





--
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus





Performance Problem

2020-03-04 Thread Kordexa Stroken
Hello, I am working on a project using PostgreSQL 10. My main server has 2 x
E5-2680 v3 CPUs, an NVMe M.2 500 GB disk and 32 GB RAM, and my PostgreSQL
server is a VM (VirtualBox) running CentOS 7 with 16 GB RAM, PostgreSQL 10, a
4 GB database, an N-tier architecture and 40 online users. My postgresql.conf
is here:

https://pastebin.com/Y8ybSxq4

I have a performance problem with PostgreSQL. What is wrong? I cannot find
what the problem is.


`DROP DATABASE RESTRICT` ?

2020-03-04 Thread Brennan Vincent
Hello all,

I was wondering if there is some reason why Postgres doesn't support a
`RESTRICT` option that would refuse to drop a database if it contains
any schemas or objects, similarly to the semantics of `DROP SCHEMA
[...] RESTRICT`.

Is there a fundamental reason not to support it, or is it just that
nobody has implemented it yet?

Thanks
Brennan




Re: `DROP DATABASE RESTRICT` ?

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 6:17 PM Brennan Vincent 
wrote:

> Hello all,
>
> I was wondering if there is some reason why Postgres doesn't support a
> `RESTRICT` option that would refuse to drop a database if it contains
> any schemas or objects, similarly to the semantics of `DROP SCHEMA
> [...] RESTRICT`.
>
> Is there a fundamental reason not to support it, or is it just that
> nobody has implemented it yet?
>

Fundamentally you are not connected to the database you are dropping and so
don't have access to the knowledge of whether it's populated or not.  It
also likely has at least an empty public schema...

David J.


Re: Performance Problem

2020-03-04 Thread David G. Johnston
On Wed, Mar 4, 2020 at 6:17 PM Kordexa Stroken  wrote:

>  Hello iam working on a project and using Postgresql 10.My main server 2 x
> E5 2680 v3 Cpu NVMe M.2 500 Gb Disk 32 GB Ram and my postgresql server is a
> VM(VirtualBox) Centos7 16GB ram Postgresql 10 Db size 4 GB and my solution
> is N-Tier Architecture & 40 online user . My postgresql.conf here:
>
> https://pastebin.com/Y8ybSxq4
>
> I have performance problem with postgresql.What is wrong ? I can not find
> what is problem.
>

Not really sure how anyone else would be able to either.

Here are some suggestions for how to request help that is actionable:

https://wiki.postgresql.org/wiki/Slow_Query_Questions

Start specific and then generalize.

David J.


Re: Performance Problem

2020-03-04 Thread Justin
Taking a quick glance at the config file I do not see any modifications to
the key settings:

shared_buffers
effective_cache_size
work_mem

meaning the server is running at the default settings, which results in
horrible performance.

Here is a website that gives suggested config changes based on server
hardware: https://pgtune.leopard.in.ua/#/
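
For a 16 GB VM, illustrative starting points only (the actual values depend
on the workload; treat these as assumptions to verify, not recommendations):

shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 16MB
maintenance_work_mem = 1GB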

That said, to get help on slow queries you need to describe the tables in
question and post the query along with EXPLAIN (ANALYZE) results.

On Wed, Mar 4, 2020 at 8:17 PM Kordexa Stroken  wrote:

>  Hello iam working on a project and using Postgresql 10.My main server 2 x
> E5 2680 v3 Cpu NVMe M.2 500 Gb Disk 32 GB Ram and my postgresql server is a
> VM(VirtualBox) Centos7 16GB ram Postgresql 10 Db size 4 GB and my solution
> is N-Tier Architecture & 40 online user . My postgresql.conf here:
>
> https://pastebin.com/Y8ybSxq4
>
> I have performance problem with postgresql.What is wrong ? I can not find
> what is problem.
>


Real application clustering in postgres.

2020-03-04 Thread Daulat Ram
Hi team,

Is there any possibility/option to set up real application clustering in
Postgres, like the RAC feature we have in Oracle?

What about multi-master replication in Postgres? Would you please suggest how
it is useful and how to set it up?

Thanks.