Re: Required PostgreSQL 10.4 version for SUSE Enterprise

2019-04-01 Thread Ron
Go up a level from 
https://zypp.postgresql.org/10/suse/sles-12-x86_64/repoview/postgresql10-server.html 
to 
https://zypp.postgresql.org/10/suse/sles-12-x86_64/repoview/letter_p.group.html 
then get all the packages you need.
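For example, an untested sketch (the repo root URL here is inferred by going
up from the repoview pages above; package names are as listed there):

    # add the PGDG 10 repository for SLES 12 and install server plus client (psql)
    zypper addrepo https://zypp.postgresql.org/10/suse/sles-12-x86_64/ pgdg-10
    zypper refresh
    zypper install postgresql10 postgresql10-server

Installing through zypper rather than force-installing a single RPM lets the
dependency resolver pull in the rest.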


On 4/1/19 2:42 AM, Ankit Trivedi wrote:

Hello,

We tried to install the RPM provided in the mail trail below, and have 
attached the error we got. We force-installed, ignoring the error, and 
found that some of the dependencies are missing.


For example psql, which is mandatory.


Thanks & Regards,
*Ankit Trivedi,*
System Admin - IT
ankit.triv...@nascentinfo.com 
+91-9408771306


*Nascent Info Technologies Pvt. Ltd.*

AF-1, Shapath IV, Opp. Karnavati Club
SG Highway, Ahmedabad - 380 051

Tel: +91 79 4032 1200 

*www.nascentinfo.com* 




On Sun, Mar 31, 2019 at 8:39 PM Ankit Trivedi 
<ankit.triv...@nascentinfo.com> wrote:


Please find it below.

-- Forwarded message -
From: *Adrian Klaver* <adrian.kla...@aklaver.com>
Date: Sun, Mar 31, 2019, 20:27
Subject: Re: Required PostgreSQL 10.4 version for SUSE Enterprise
To: Ankit Trivedi <ankit.triv...@nascentinfo.com>
Cc: <pgsql-gene...@postgresql.org>


On 3/30/19 10:45 PM, Ankit Trivedi wrote:
> Dear Adrian,
>
> Thanks for your prompt reply.
>
> But I specifically need the 10.4 version.
>
> Please provide me steps or an RPM for 10.4.


https://zypp.postgresql.org/10/suse/sles-12-x86_64/repoview/postgresql10-server.html

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Angular momentum makes the world go 'round.


RE: Table Export & Import

2019-04-01 Thread ROS Didier
Hi
One solution could be to use Intel FPGA technology:
https://www.intel.fr/content/www/fr/fr/products/programmable.html
The principle is to add a PCI electronic card to the server, alongside its CPUs and RAM.
This greatly speeds up the loading of the data into the database.

Best Regards



Didier ROS
Expertise SGBD
EDF - DTEO - DSIT - IT DMA
Département Solutions Groupe
Groupe Performance Applicative
32 avenue Pablo Picasso
92000 NANTERRE

didier@edf.fr
Tél. : +33 6 49 51 11 88



From: satcs...@gmail.com
Sent: Monday, April 1, 2019 08:10
To: pgsql-gene...@postgresql.org >> PG-General Mailing List
Subject: Table Export & Import

Hi Team,

We have a requirement to copy a table from one database server to another 
database server. We are looking for a solution to achieve this with lesser 
downtime on Prod. Can you help us with this?

Table Size: 160GB
Postgresql Server Version: 9.5







Re: Table Export & Import

2019-04-01 Thread Sathish Kumar
Hi Ros,

We are using a server in the cloud.

On Mon, Apr 1, 2019, 5:26 PM ROS Didier wrote:

> Hi
>
> One solution could be to use Intel FPGA technology:
> https://www.intel.fr/content/www/fr/fr/products/programmable.html
>
> The principle is to add a PCI electronic card to the server, alongside its
> CPUs and RAM.
>
> This greatly speeds up the loading of the data into the database.
>
>
>
> Best Regards
>
>
>
>
>
> *Didier ROS*
>
> *Expertise SGBD*
>
> EDF - DTEO - DSIT - IT DMA
>
> Département Solutions Groupe
>
> Groupe Performance Applicative
>
> 32 avenue Pablo Picasso
>
> 92000 NANTERRE
>
>
>
> *didier@edf.fr *
>
> Tél. : +33 6 49 51 11 88
>
>
>
>
>
>
> *From:* satcs...@gmail.com
> *Sent:* Monday, April 1, 2019 08:10
> *To:* pgsql-gene...@postgresql.org >> PG-General Mailing List <
> pgsql-gene...@postgresql.org>
> *Subject:* Table Export & Import
>
>
>
> Hi Team,
>
>
>
> We have a requirement to copy a table from one database server to another
> database server. We are looking for a solution to achieve this with lesser
> downtime on Prod. Can you help us with this?
>
>
>
> Table Size: 160GB
>
> Postgresql Server Version: 9.5
>
>


Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-01 Thread Magnus Hagander
On Sat, Mar 30, 2019 at 10:16 PM Tom Lane  wrote:

> "Daniel Verite"  writes:
> > I've noticed this post being currently shared on social media:
>
> >
> https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/cve-2019-9193-authenticated-arbitrary-command-execution-on-postgresql-9-3/
>
> > The claim that COPY FROM PROGRAM warrants a CVE seems groundless
> > because you need to be superuser in the first place to do that.
>
> Yeah; this is supposing that there is a security boundary between
> Postgres superusers and the OS account running the server, which
> there is not.  We could hardly have features like untrusted PLs
> if we were trying to maintain such a boundary.
>
> > I don't know if there are precedents of people claiming
> > CVE entries on Postgres without seemingly reaching out to the
> > community first. Should something be done proactively about
> > that particular claim?
>
> Well, it's odd, because somebody at trustwave (not the actual
> author of this "research") did reach out to the pgsql-security
> list, and we discussed with him that it wasn't a violation of
> Postgres' security model, and he agreed.  But then they've
> posted this anyway.  Left hand doesn't talk to right hand there,
> apparently.
>

I wonder if we need to prepare some sort of official response to that.

I was considering writing up a blog post about it, but maybe we need
something more official?

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


logical replication - negative bitmapset member not allowed

2019-04-01 Thread Tim Clarke
I'm getting this message every 5 seconds on a single-master,
single-slave replication of PG10.7->PG10.7, both on CentOS. It's over the
'net but otherwise seems to perform excellently. Any ideas what's
causing it and how to fix it?

--
Tim Clarke
IT Director
Direct: +44 (0)1376 504510 | Mobile: +44 (0)7887 563420



Main: +44 (0)1376 503500 | Fax: +44 (0)1376 503550
Web: https://www.manifest.co.uk/



Minerva Analytics Ltd
9 Freebournes Court | Newland Street | Witham | Essex | CM8 2BL | England





Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-01 Thread Tom Lane
Magnus Hagander  writes:
> On Sat, Mar 30, 2019 at 10:16 PM Tom Lane  wrote:
>> Yeah; this is supposing that there is a security boundary between
>> Postgres superusers and the OS account running the server, which
>> there is not.  We could hardly have features like untrusted PLs
>> if we were trying to maintain such a boundary.

> I wonder if we need to prepare some sort of official response to that.
> I was considering writing up a blog post about it, but maybe we need
> something more official?

Blog post seems like a good idea.  As for an "official" response,
it strikes me that maybe we need better documentation.  I'm not sure
that we spell out anywhere what we think the security model is.
There are plenty of scattered warnings about unsafe things, but
if there's any specific statement equivalent to what I just
wrote above, I can't remember where.

(By the same token, I'm not sure where would be a good place
to put it ...)

regards, tom lane




Re: logical replication - negative bitmapset member not allowed

2019-04-01 Thread Tom Lane
Tim Clarke  writes:
> I'm getting this message every 5 seconds on a single-master,
> single-slave replication of PG10.7->PG10.7, both on CentOS. It's over the
> 'net but otherwise seems to perform excellently. Any ideas what's
> causing it and how to fix?

That'd certainly be a bug, but we'd need to reproduce it to fix it.
What are you doing that's different from everybody else?  Can you
provide any other info to narrow down the problem?

regards, tom lane




Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-01 Thread Jonathan S. Katz


> On Apr 1, 2019, at 9:55 AM, Tom Lane  wrote:
> 
> Magnus Hagander  writes:
>>> On Sat, Mar 30, 2019 at 10:16 PM Tom Lane  wrote:
>>> Yeah; this is supposing that there is a security boundary between
>>> Postgres superusers and the OS account running the server, which
>>> there is not.  We could hardly have features like untrusted PLs
>>> if we were trying to maintain such a boundary.
> 
>> I wonder if we need to prepare some sort of official response to that.
>> I was considering writing up a blog post about it, but maybe we need
>> something more official?
> 
> Blog post seems like a good idea.  As for an "official" response,
> it strikes me that maybe we need better documentation.

+1, though I’d want to see if people get noisier about it before we rule
out an official response.

A blog post from a reputable author who can speak to security should
be good enough and we can make noise through our various channels.

Jonathan 




Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-01 Thread Alvaro Herrera
On 2019-Apr-01, Tom Lane wrote:

> Magnus Hagander  writes:
> > On Sat, Mar 30, 2019 at 10:16 PM Tom Lane  wrote:
> >> Yeah; this is supposing that there is a security boundary between
> >> Postgres superusers and the OS account running the server, which
> >> there is not.  We could hardly have features like untrusted PLs
> >> if we were trying to maintain such a boundary.
> 
> > I wonder if we need to prepare some sort of official response to that.
> > I was considering writing up a blog post about it, but maybe we need
> > something more official?
> 
> Blog post seems like a good idea.  As for an "official" response,
> it strikes me that maybe we need better documentation.  I'm not sure
> that we spell out anywhere what we think the security model is.
> There are plenty of scattered warnings about unsafe things, but
> if there's any specific statement equivalent to what I just
> wrote above, I can't remember where.
> 
> (By the same token, I'm not sure where would be a good place
> to put it ...)

Apparently we had a "Security" chapter in version 7.0, which got
removed, and we recently got a complaint about that:
https://postgr.es/m/cabxk5gcbhlsdrhzvnkuykk2gr+tcwkneq2e-wnf6otsjtq3...@mail.gmail.com

I think Peter is right that we may not want to duplicate the contents of
each section, but I think it makes sense to have a chapter in "Part III.
Server Administration", maybe just after chapters 26 or 27, where some
security considerations are put forth with references to the detailed
docs on security for each aspect of the system.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Gigantic load average spikes

2019-04-01 Thread Adrian Klaver

On 3/31/19 10:08 PM, rihad wrote:

What exactly do you mean by "running processes"? I don't think I've ever
seen a Unix with only 1 to 3 running processes in total, so you are
probably referring to processes in a certain state. Runnable (R)?
Uninterruptible sleep (D)? Both? Something else?


Just that, 250-300 running processes in top shown for a second. 250-300 
is the number of postgres worker processes used, but normally only 1-3 
of them are running according to top. At times of load FreeBSD (?) 
schedules all of them to run. This doesn't really put the machine on its 
knees, it just impacts load avg.


So what are the process titles?








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Table Export & Import

2019-04-01 Thread Adrian Klaver

On 3/31/19 11:09 PM, Sathish Kumar wrote:

Hi Team,

We have a requirement to copy a table from one database server to 
another database server. We are looking for a solution to achieve this 
with lesser downtime on Prod. Can you help us with this?


So what is creating the downtime now?

In addition to other suggestions you might want to take a look at:

https://www.postgresql.org/docs/9.5/postgres-fdw.html
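For example, a rough sketch of reaching the table across servers with 
postgres_fdw (all names and credentials here are hypothetical):

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER src_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'dbserver1', dbname 'proddb');
    CREATE USER MAPPING FOR CURRENT_USER SERVER src_srv
        OPTIONS (user 'app_user', password 'secret');
    IMPORT FOREIGN SCHEMA public LIMIT TO (big_table)
        FROM SERVER src_srv INTO public;
    -- the remote table can now be read locally, e.g.:
    -- INSERT INTO big_table_local SELECT * FROM big_table;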




Table Size: 160GB
Postgresql Server Version: 9.5





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Table Export & Import

2019-04-01 Thread Michel Pelletier
As others have pointed out, you can take a pg_dump at any time.  You can
provide arguments to pg_dump to dump only a subset of the database (like
one table).  Also mentioned is using a foreign data wrapper (FDW).  Yet
another approach is to use the "copy to/from program" command to stream the
table from one db to the other using netcat (nc):

On the origin server (start this side first; it listens):

\copy table_name to program 'nc -l <port>';

On the destination server:

\copy table_name from program 'nc origin_server <port>';

If your network is slower than your CPU, you can compress the data by piping
it through lz4 (or gzip/bzip2, whatever, but lz4 is fast in both directions
for streaming).
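For example, an untested sketch (the port is hypothetical, and nc option
syntax varies between netcat implementations):

On the origin server (start this side first; it listens and compresses):

    \copy table_name to program 'lz4 | nc -l 9999';

On the destination server:

    \copy table_name from program 'nc origin_server 9999 | lz4 -d';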

-Michel


On Sun, Mar 31, 2019 at 11:10 PM Sathish Kumar  wrote:

> Hi Team,
>
> We have a requirement to copy a table from one database server to another
> database server. We are looking for a solution to achieve this with lesser
> downtime on Prod. Can you help us with this?
>
> Table Size: 160GB
> Postgresql Server Version: 9.5
>
>
>


Re: Table Export & Import

2019-04-01 Thread Sathish Kumar
Hi Adrian,
We are exporting live table data to a new database, so we need to stop our
application until the export/import is completed. We would like to minimise
this downtime.

On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver 
wrote:

> On 3/31/19 11:09 PM, Sathish Kumar wrote:
> > Hi Team,
> >
> > We have a requirement to copy a table from one database server to
> > another database server. We are looking for a solution to achieve this
> > with lesser downtime on Prod. Can you help us with this?
>
> So what is creating the downtime now?
>
> In addition to other suggestions you might want to take a look at:
>
> https://www.postgresql.org/docs/9.5/postgres-fdw.html
>
>
> >
> > Table Size: 160GB
> > Postgresql Server Version: 9.5
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Gigantic load average spikes

2019-04-01 Thread rihad

On 04/01/2019 06:17 PM, Adrian Klaver wrote:

On 3/31/19 10:08 PM, rihad wrote:
What exactly do you mean by "running processes"? I don't think I've 
ever

seen a Unix with only 1 to 3 running processes in total, so you are
probably referring to processes in a certain state. Runnable (R)?
Uninterruptible sleep (D)? Both? Something else?


Just that, 250-300 running processes in top shown for a second. 
250-300 is the number of postgres worker processes used, but normally 
only 1-3 of them are running according to top. At times of load 
FreeBSD (?) 


So what are the process titles?


postgres





Re: Table Export & Import

2019-04-01 Thread Ron

/"so we need to stop our application until the export/import is completed."/

Why?

On 4/1/19 9:47 AM, Sathish Kumar wrote:

Hi Adrian,
We are exporting live table data to a new database, so we need to stop our 
application until the export/import is completed. We would like to 
minimise this downtime.


On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver wrote:


On 3/31/19 11:09 PM, Sathish Kumar wrote:
> Hi Team,
>
> We have a requirement to copy a table from one database server to
> another database server. We are looking for a solution to achieve this
> with lesser downtime on Prod. Can you help us with this?

So what is creating the downtime now?

In addition to other suggestions you might want to take a look at:

https://www.postgresql.org/docs/9.5/postgres-fdw.html


>
> Table Size: 160GB
> Postgresql Server Version: 9.5
>
>



--
Angular momentum makes the world go 'round.


Re: Gigantic load average spikes

2019-04-01 Thread Adrian Klaver

On 4/1/19 8:06 AM, rihad wrote:

On 04/01/2019 06:17 PM, Adrian Klaver wrote:

On 3/31/19 10:08 PM, rihad wrote:
What exactly do you mean by "running processes"? I don't think I've 
ever

seen a Unix with only 1 to 3 running processes in total, so you are
probably referring to processes in a certain state. Runnable (R)?
Uninterruptible sleep (D)? Both? Something else?


Just that, 250-300 running processes in top shown for a second. 
250-300 is the number of postgres worker processes used, but normally 
only 1-3 of them are running according to top. At times of load 
FreeBSD (?) 


So what are the process titles?


postgres


Have you tried hitting the 'c' key in top to get a fuller description?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Table Export & Import

2019-04-01 Thread Michel Pelletier
On Mon, Apr 1, 2019 at 7:47 AM Sathish Kumar  wrote:

> Hi Adrian,
> We are exporting live table data to a new database, so we need to stop our
> application until the export/import is completed. We would like to minimise
> this downtime.
>

It's more complicated if you want to keep your application running and
writing to the db while migrating.  There are trigger-level replication
tools, like Slony, that can be used to stream changes to the new database,
and then you switch over once you get both of them to parity, but there are
some gotchas.  You said the db is only 160GB; it depends a lot on what kind
of schema we're talking about, but I imagine it wouldn't take long to just
take the downtime and do a normal pg_upgrade.
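If a short downtime is acceptable, the straightforward path for a single
table is a dump piped straight into a restore (untested sketch; host,
database, and table names are hypothetical):

    pg_dump -h dbserver1 -d proddb -t big_table -Fc \
        | pg_restore -h dbserver2 -d proddb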


>
> On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver 
> wrote:
>
>> On 3/31/19 11:09 PM, Sathish Kumar wrote:
>> > Hi Team,
>> >
>> > We have a requirement to copy a table from one database server to
>> > another database server. We are looking for a solution to achieve this
>> > with lesser downtime on Prod. Can you help us with this?
>>
>> So what is creating the downtime now?
>>
>> In addition to other suggestions you might want to take a look at:
>>
>> https://www.postgresql.org/docs/9.5/postgres-fdw.html
>>
>>
>> >
>> > Table Size: 160GB
>> > Postgresql Server Version: 9.5
>> >
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>


Re: Table Export & Import

2019-04-01 Thread Sathish Kumar
The table size is 160gb. We would like to move/copy this table from one db
server to another db server.

On Tue, Apr 2, 2019, 12:17 AM Michel Pelletier 
wrote:

> On Mon, Apr 1, 2019 at 7:47 AM Sathish Kumar  wrote:
>
>> Hi Adrian,
>> We are exporting live table data to a new database, so we need to stop
>> our application until the export/import is completed. We would like to
>> minimise this downtime.
>>
>
> It's more complicated if you want to keep your application running and
> writing to the db while migrating.  There are trigger-level replication
> tools, like slony that can be used to stream changes to the new database,
> and then you switch over once you get both of them to parity, but there are
> some gotchas.  You said the db is only 160GB; it depends a lot on what kind
> of schema we're talking about, but I imagine it wouldn't take long to just
> take the downtime and do a normal pg_upgrade.
>
>
>>
>> On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver 
>> wrote:
>>
>>> On 3/31/19 11:09 PM, Sathish Kumar wrote:
>>> > Hi Team,
>>> >
>>> > We have a requirement to copy a table from one database server to
>>> > another database server. We are looking for a solution to achieve this
>>> > with lesser downtime on Prod. Can you help us with this?
>>>
>>> So what is creating the downtime now?
>>>
>>> In addition to other suggestions you might want to take a look at:
>>>
>>> https://www.postgresql.org/docs/9.5/postgres-fdw.html
>>>
>>>
>>> >
>>> > Table Size: 160GB
>>> > Postgresql Server Version: 9.5
>>> >
>>> >
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>


Re: Gigantic load average spikes

2019-04-01 Thread Michel Pelletier
On Sun, Mar 31, 2019 at 10:49 PM David Rowley 
wrote:

>
> Perhaps a bunch of processes waiting on the access exclusive lock on
> the materialized view being released?
>
> log_lock_waits might help you if the MV takes more than a second to
> refresh, otherwise, you might need to have a look at ungranted locks
> in pg_locks and see if the number of locks spikes during the refresh.
>

I think David's got the right idea here.  Like he said, investigate
pg_locks; if it is the refresh of the materialized view, you can avoid the
problem by doing 'REFRESH MATERIALIZED VIEW CONCURRENTLY'.  You will need
at least one unique index on the table.
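For example, a sketch using the standard catalog views (the MV and column
names here are hypothetical):

    -- see which lock requests are currently waiting
    SELECT locktype, relation::regclass, mode, pid
    FROM pg_locks
    WHERE NOT granted;

    -- CONCURRENTLY requires a unique index on the MV
    CREATE UNIQUE INDEX ON my_mv (id);
    REFRESH MATERIALIZED VIEW CONCURRENTLY my_mv;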


> --
>  David Rowley   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
>
>


Re: Gigantic load average spikes

2019-04-01 Thread rihad

On 04/01/2019 08:30 PM, Michel Pelletier wrote:



On Sun, Mar 31, 2019 at 10:49 PM David Rowley 
<david.row...@2ndquadrant.com> wrote:



Perhaps a bunch of processes waiting on the access exclusive lock on
the materialized view being released?

log_lock_waits might help you if the MV takes more than a second to
refresh, otherwise, you might need to have a look at ungranted locks
in pg_locks and see if the number of locks spikes during the refresh.


I think David's got the right idea here.  Like he said, investigate 
pg_locks, if it is the refresh materialized view, you can avoid the 
problem by doing 'REFRESH MATERIALIZED VIEW CONCURRENTLY'.  You will 
need at least one unique index on the table.


It is actually refreshed concurrently.




-- 
 David Rowley http://www.2ndQuadrant.com/

 PostgreSQL Development, 24x7 Support, Training & Services






Help with insert query

2019-04-01 Thread Glenn Schultz
All,

The query below is designed to insert into a table.  This works when I have
a single loan which I insert.  However, if I remove the part of the where
clause that restricts to a single loan, the insert does not work.  The table
fnmloan is a large table with 500mm+ rows and the query runs for about 4
hours.  Any idea of how to get this to work?  I am a little stumped since
the query works with one loan.

Glenn

SET max_parallel_workers_per_gather = 8;
SET random_page_cost = 1;

truncate fnmloan_balance;
insert into fnmloan_balance (
fctrdt, loanseqnum, secmnem, beginbal, scheduled,endbal,smm
)

select
fnmloan.fctrdt
,fnmloan.loanseqnum
,fnmloan.secmnem
--,fnmloan.orignoterate
--,fnmloan.loanage
--,fnmloan.origloanamt
,fnmloan.currrpb as beginbal
,round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
  cast(fnmloan.remterm - 1 as numeric),
  cast(fnmloan.currrpb as numeric)),4)) as scheduled
,coalesce(endbal.currrpb,0) as endbal
,abs(round(
  cast((fnmloan.currrpb - coalesce(endbal.currrpb,0) -
round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
  cast(fnmloan.remterm - 1 as numeric),
  cast(fnmloan.currrpb as numeric)),4)) )/(fnmloan.currrpb -
round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
  cast(fnmloan.remterm - 1 as numeric),
  cast(fnmloan.currrpb as numeric)),4)) ) as numeric)
  ,4)) as SMM

from
(
 select * from fnmloan
 where
 fctrdt < '03-01-2019'
 and
 loanseqnum = '5991017042'
) as fnmloan


left outer join
(select
fctrdt - interval '1 month' as fctrdt
,loanseqnum
,orignoterate
,loanage
,origloanamt
,currrpb
from fnmloan
) as endbal

on fnmloan.loanseqnum = endbal.loanseqnum
and fnmloan.fctrdt = endbal.fctrdt


Re: Help with insert query

2019-04-01 Thread Michel Pelletier
On Mon, Apr 1, 2019 at 10:37 AM Glenn Schultz  wrote:

> All,
>
> The query below is designed to insert into a table.  This works when I
> have a single loan which I insert.  However, if I remove the part of the
> where clause that restricts to a single loan, the insert does not work.  The table fnmloan
> is a large table with 500mm + rows and the query runs for about 4 hours.
> Any idea of how to get this to work?  I am a little stumped since the query
> works with one loan.
>
>
Inserting one row is fast; inserting 500 million rows is going to take
quite a bit longer.  I suggest you break your query up into batches and
insert, say, 1 million rows at a time.  Also, it might be a good idea to
drop your indexes on the target table and re-create them after you do the
bulk insert, and also do an 'ANALYZE' on the target table after you have
inserted all the records.
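The shape of the idea on a toy pair of tables (hypothetical names;
substitute the real select list and key ranges):

    -- batch 0: rows whose key falls in the first range
    INSERT INTO target_tbl (id, val)
    SELECT id, val
    FROM source_tbl
    WHERE id >= 0 AND id < 1000000;
    -- increment the range and repeat; once all batches are in:
    ANALYZE target_tbl;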

-Michel



> Glenn
>
> SET max_parallel_workers_per_gather = 8;
> SET random_page_cost = 1;
>
> truncate fnmloan_balance;
> insert into fnmloan_balance (
> fctrdt, loanseqnum, secmnem, beginbal, scheduled,endbal,smm
> )
>
> select
> fnmloan.fctrdt
> ,fnmloan.loanseqnum
> ,fnmloan.secmnem
> --,fnmloan.orignoterate
> --,fnmloan.loanage
> --,fnmloan.origloanamt
> ,fnmloan.currrpb as beginbal
> ,round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>   cast(fnmloan.remterm - 1 as numeric),
>   cast(fnmloan.currrpb as numeric)),4)) as scheduled
> ,coalesce(endbal.currrpb,0) as endbal
> ,abs(round(
>   cast((fnmloan.currrpb - coalesce(endbal.currrpb,0) -
> round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>   cast(fnmloan.remterm - 1 as numeric),
>   cast(fnmloan.currrpb as numeric)),4)) )/(fnmloan.currrpb -
> round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>   cast(fnmloan.remterm - 1 as numeric),
>   cast(fnmloan.currrpb as numeric)),4)) ) as numeric)
>   ,4)) as SMM
>
> from
> (
>  select * from fnmloan
>  where
>  fctrdt < '03-01-2019'
>  and
>  loanseqnum = '5991017042'
> ) as fnmloan
>
>
> left outer join
> (select
> fctrdt - interval '1 month' as fctrdt
> ,loanseqnum
> ,orignoterate
> ,loanage
> ,origloanamt
> ,currrpb
> from fnmloan
> ) as endbal
>
> on fnmloan.loanseqnum = endbal.loanseqnum
> and fnmloan.fctrdt = endbal.fctrdt
>
>


Re: Key encryption and relational integrity

2019-04-01 Thread Moreno Andreo

On 29/03/2019 20:23, Adrian Klaver wrote:

On 3/29/19 9:01 AM, Moreno Andreo wrote:

And I tried to find a solution, and since I did not like that much 
what I found (and it seems that neither do you :-) ), I came here 
hoping that someone would share his experience to shed some light on 
the topic.


From what you have posted, the biggest issue you are having is less 
than real-time search on patient names due to the need to meet 
pseudonymisation. To me that is always going to be a problem as there 
are two opposing forces at work, overhead to implement 
pseudonymisation vs quick lookup. Might be time to lower expectations 
on what can be done.



... or just do NOT meet pseudonymisation at all, but try to enforce 
other rules suggested by the GDPR.


Peter highlighted a point:

"

The GDPR
doesn't say how to do that at all (the legislators were wise enough that
any attempt to do that would result in a mess). So you can't say "the
GDPR says we have to do it this way" (and if your consultant says that
it is probably time to get a different one). You have to consider all
the risks (and yes, an attacker getting access to some or all of the
data is a risk, but a doctor not being able to access a patient's
records is also a risk) and implement the best you can do considering
"the state of the art, the costs of implementation", etc.

"

That would be absolutely right. I'm not forced to use pseudonymisation 
if there's a risk it makes things worse in a system. I've got to discuss 
these "two opposing forces at work" with a privacy expert (maybe 
choosing another one, as Peter suggested :-) ) and ask whether it could 
justify declining pseudonymisation on the grounds that "pseudonymisation 
puts overall performance or database integrity at risk".


What do you think?
























Re: Key encryption and relational integrity

2019-04-01 Thread Rory Campbell-Lange
On 01/04/19, Moreno Andreo (moreno.and...@evolu-s.it) wrote:
...
> I'm not forced to use pseudonymisation if there's a risk it makes
> things worse in a system. I've got to discuss these "two opposing
> forces at work" with a privacy expert (maybe choosing another one, as
> Peter suggested :-) ) and ask whether it could justify declining
> pseudonymisation on the grounds that "pseudonymisation puts overall
> performance or database integrity at risk"

How to interpret the pseudonymisation conditions is ... complicated. The
UK's Information Commissioner's Office (ICO) writes that
pseudonymisation relates to:

“…the processing of personal data in such a manner that the personal
data can no longer be attributed to a specific data subject without
the use of additional information, provided that such additional
information is kept separately and is subject to technical and
organisational measures to ensure that the personal data are not
attributed to an identified or identifiable natural person.”

and that this "...can reduce the risks to the data subjects".

The concept of application realms may be relevant to consider here. An
application may be considered GDPR compliant without pseudonymisation if
other measures are taken and the use case is appropriate.

On the other hand, a copy of a production database in testing which has
been pseudonymised may, if compromised, still leak personal data. As the
ICO states:

“…Personal data which have undergone pseudonymisation, which could
be attributed to a natural person by the use of additional
information should be considered to be information on an
identifiable natural person…”

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/what-is-personal-data/what-is-personal-data/

If leakage occurs pseudonymisation has achieved nothing.

Therefore it may be useful to determine if data in a usage realm should
be either fully anonymised or not at all. In the latter case the normal
GDPR controls must all pertain.

Rory






Re: Test mail for pgsql-general

2019-04-01 Thread preejackie

Hi

I'm Praveen Velliengiri, a student from India. I'm working on developing 
speculative compilation support in the LLVM ORC JIT infrastructure.


As LLVM ORC supports compiling on multiple backend threads, it would be 
effective to compile functions speculatively before they are called by the 
executing function. Then, when we request the JIT to compile a function, 
the JIT can immediately return the function address of the raw executable 
bits. This will greatly reduce JIT latencies on modern multi-core machines. 
I'm also working on designing ORC in-place dynamic profiling support; with 
this, the JIT will automatically be able to identify hot functions and 
compile them at a higher optimization level to achieve good performance.


I'm proposing this project for GSoC 2019. It would be helpful to know how 
effective these new features would be for the pgsql engine, so that I can 
include your comments in the "View from Clients" section of my proposal.


Please reply :)

--

Have a great day!
PreeJackie



New LLVM JIT Features

2019-04-01 Thread preejackie

Hi

I'm Praveen Velliengiri, a student from India. I'm working on developing 
speculative compilation support in the LLVM ORC JIT infrastructure.


As LLVM ORC supports compiling on multiple backend threads, it would be 
effective to compile functions speculatively before they are called by the 
executing function. Then, when we request the JIT to compile a function, 
the JIT can immediately return the function address of the raw executable 
bits. This will greatly reduce JIT latencies on modern multi-core machines. 
I'm also working on designing ORC in-place dynamic profiling support; with 
this, the JIT will automatically be able to identify hot functions and 
compile them at a higher optimization level to achieve good performance.


I'm proposing this project for GSoC 2019. It would be helpful to know how 
effective these new features would be for the pgsql engine, so that I can 
include your comments in the "View from Clients" section of my proposal.


Please reply :)

--

Have a great day!
PreeJackie



Re: Help with insert query

2019-04-01 Thread Ron

On 4/1/19 12:37 PM, Glenn Schultz wrote:

All,

The query below is designed to insert into a table.  This works when I 
have a single loan which I insert. However, if I remove the part of the 
where clause that restricts to a single loan, the insert does not work.  The table fnmloan 
is a large table with 500mm + rows and the query runs for about 4 hours. 
Any idea of how to get this to work?  I am a little stumped since the 
query works with one loan.


Following up on Michel's answer... 500MM rows in 4 hours is an insert rate 
of 34,722.222... records per second.


--
Angular momentum makes the world go 'round.




Re: Help with insert query

2019-04-01 Thread Adrian Klaver

On 4/1/19 10:37 AM, Glenn Schultz wrote:

All,

The query below is designed to insert into a table.  This works when I 
have a single loan which I insert.  However, if I remove the part of the 
where clause that restricts to a single loan, the insert does not work.  The table 
fnmloan is a large table with 500mm + rows and the query runs for about 
4 hours.  Any idea of how to get this to work?  I am a little stumped 
since the query works with one loan.


If you pull the SELECT portion of the query out, run it without 
restricting the loanseqnum, and use EXPLAIN ANALYZE, how many rows do you 
get and what does the EXPLAIN show?
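For instance, pared down from the query quoted below (note that EXPLAIN 
ANALYZE actually executes the statement, so expect it to take a while on 
500mm rows):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT fnmloan.loanseqnum, fnmloan.fctrdt
    FROM fnmloan
    LEFT JOIN (SELECT fctrdt - interval '1 month' AS fctrdt, loanseqnum
               FROM fnmloan) AS endbal
           ON fnmloan.loanseqnum = endbal.loanseqnum
          AND fnmloan.fctrdt = endbal.fctrdt
    WHERE fnmloan.fctrdt < '03-01-2019';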




Glenn

SET max_parallel_workers_per_gather = 8;
SET random_page_cost = 1;

truncate fnmloan_balance;
insert into fnmloan_balance (
fctrdt, loanseqnum, secmnem, beginbal, scheduled,endbal,smm
)

select
fnmloan.fctrdt
,fnmloan.loanseqnum
,fnmloan.secmnem
--,fnmloan.orignoterate
--,fnmloan.loanage
--,fnmloan.origloanamt
,fnmloan.currrpb as beginbal
,round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
   cast(fnmloan.remterm - 1 as numeric),
   cast(fnmloan.currrpb as numeric)),4)) as scheduled
,coalesce(endbal.currrpb,0) as endbal
,abs(round(
   cast((fnmloan.currrpb - coalesce(endbal.currrpb,0) - 
round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),

   cast(fnmloan.remterm - 1 as numeric),
   cast(fnmloan.currrpb as numeric)),4)) )/(fnmloan.currrpb - 
round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),

   cast(fnmloan.remterm - 1 as numeric),
   cast(fnmloan.currrpb as numeric)),4)) ) as numeric)
   ,4)) as SMM

from
(
  select * from fnmloan
  where
  fctrdt < '03-01-2019'
  and
  loanseqnum = '5991017042'
) as fnmloan


left outer join
(select
fctrdt - interval '1 month' as fctrdt
,loanseqnum
,orignoterate
,loanage
,origloanamt
,currrpb
from fnmloan
) as endbal

on fnmloan.loanseqnum = endbal.loanseqnum
and fnmloan.fctrdt = endbal.fctrdt




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Help with insert query

2019-04-01 Thread Michel Pelletier
Please reply-all to the list and not just me directly.

I didn't realize you meant the table was empty when you did the full
insert.  As Adrian pointed out, run your select using EXPLAIN; it will show
you why you are producing no rows.  Looking at your query just
superficially, the outer join looks suspicious; maybe using a subquery to
get the ending balance is a better approach.




On Mon, Apr 1, 2019 at 11:02 AM Glenn Schultz  wrote:

> Hi Michael,
>
> I will try that.  What I don't understand is why, when using just one loan,
> the insert is successful, but when working with the whole table, once the
> query is done, there is nothing inserted into the table.
>
> Best,
> Glenn
>
> Sent from my iPhone
>
>
> On Apr 1, 2019, at 1:55 PM, Michel Pelletier 
> wrote:
>
> On Mon, Apr 1, 2019 at 10:37 AM Glenn Schultz  wrote:
>
>> All,
>>
>> The query below is designed to insert into a table.  This works when I
>> have a single loan which I insert.  However, if I remove the part of the
>> where clause that restricts to a single loan, the insert does not work.  The table fnmloan
>> is a large table with 500mm + rows and the query runs for about 4 hours.
>> Any idea of how to get this to work?  I am a little stumped since the query
>> works with one loan.
>>
>>
> Inserting one row is fast, inserting 500 million rows is going to take
> quite a bit longer.  I suggest you break your query up into batches, and
> insert, say, 1 million rows at a time.  Also it might be a good idea to
> drop your indexes on the target table and re-create them after you do the
> bulk insert, and also do an 'ANALYZE' on the target table after you have
> inserted all the records.
>
> -Michel
>
>
>
>> Glenn
>>
>> SET max_parallel_workers_per_gather = 8;
>> SET random_page_cost = 1;
>>
>> truncate fnmloan_balance;
>> insert into fnmloan_balance (
>> fctrdt, loanseqnum, secmnem, beginbal, scheduled,endbal,smm
>> )
>>
>> select
>> fnmloan.fctrdt
>> ,fnmloan.loanseqnum
>> ,fnmloan.secmnem
>> --,fnmloan.orignoterate
>> --,fnmloan.loanage
>> --,fnmloan.origloanamt
>> ,fnmloan.currrpb as beginbal
>> ,round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>>   cast(fnmloan.remterm - 1 as numeric),
>>   cast(fnmloan.currrpb as numeric)),4)) as scheduled
>> ,coalesce(endbal.currrpb,0) as endbal
>> ,abs(round(
>>   cast((fnmloan.currrpb - coalesce(endbal.currrpb,0) -
>> round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>>   cast(fnmloan.remterm - 1 as numeric),
>>   cast(fnmloan.currrpb as numeric)),4)) )/(fnmloan.currrpb -
>> round(nullif(schedprin(cast(fnmloan.orignoterate as numeric),
>>   cast(fnmloan.remterm - 1 as numeric),
>>   cast(fnmloan.currrpb as numeric)),4)) ) as numeric)
>>   ,4)) as SMM
>>
>> from
>> (
>>  select * from fnmloan
>>  where
>>  fctrdt < '03-01-2019'
>>  and
>>  loanseqnum = '5991017042'
>> ) as fnmloan
>>
>>
>> left outer join
>> (select
>> fctrdt - interval '1 month' as fctrdt
>> ,loanseqnum
>> ,orignoterate
>> ,loanage
>> ,origloanamt
>> ,currrpb
>> from fnmloan
>> ) as endbal
>>
>> on fnmloan.loanseqnum = endbal.loanseqnum
>> and fnmloan.fctrdt = endbal.fctrdt
>>
>>


Re: logical replication - negative bitmapset member not allowed

2019-04-01 Thread Alvaro Herrera
On 2019-Apr-01, Tom Lane wrote:

> Tim Clarke  writes:
> > I'm getting this message every 5 seconds on a single-master,
> > single-slave replication of PG10.7->PG10.7, both on CentOS. It's over the
> > 'net but otherwise seems to perform excellently. Any ideas what's
> > causing it and how to fix?
> 
> That'd certainly be a bug, but we'd need to reproduce it to fix it.
> What are you doing that's different from everybody else?  Can you
> provide any other info to narrow down the problem?

Maybe the replica identity of a table got set to a unique index on oid?
Or something else involving system columns?  (If replication is
otherwise working, then I suppose there's a separate publication that's
having the error; the first thing to isolate would be to see what tables
are involved in that publication).
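For example, standard catalog queries (the table name is hypothetical):

    -- on the publisher: which tables belong to which publication
    SELECT * FROM pg_publication_tables;

    -- replica identity per table: d = default, n = nothing, f = full, i = index
    SELECT relname, relreplident
    FROM pg_class
    WHERE relname = 'some_table';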

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: stale WAL files?

2019-04-01 Thread Rene Romero Benavides
On Sat, Mar 30, 2019 at 5:03 PM Gmail  wrote:

>
>
> > On Mar 30, 2019, at 10:54 AM, Gmail  wrote:
> >
> >
>  On Mar 29, 2019, at 6:58 AM, Michael Paquier 
> wrote:
> >>>
> >>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
> >>> This is pg10 so it's pg_wal.  ls -ltr
> >>>
> >>>
> >>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33
> >>> 00010CEA00B1
> >>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33
> >>> 00010CEA00B2
> >>>
> >>> ... 217 more on through to ...
> >>>
> >>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01
> >>> 00010CEA00E8
> >>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01
> >>> 00010CEA00E9
> >>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46
> >>> 00010CEA000E
> > I’m now down to 208 Mar 16 WAL files so they are being processed (at
> least deleted).  I’ve taken a snapshot of the pg_wal dir such that I can
> see which files get processed. It’s none of the files I’ve listed previously
>
> Two more have been cleaned up.  001C and 001D generated at 16:38 Mar 16
>
>
>
Please share your complete postgresql.conf file and the results from this
query:

    SELECT * FROM pg_settings;

Has someone in the past configured WAL archiving?
You've run out of disk space, as this log message you shared states:
"No space left on device"
What's the output of df -h?

-- 
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Postgresql with nextcloud in Windows Server

2019-04-01 Thread 김준형
Hi Community, I have a problem and would like help from the PostgreSQL
community. My problem is with using PostgreSQL with nextcloud (a cloud
system): at some point no more connections to the server can be made. I
checked the log files and found the reason: connections were not being
disconnected, yet nextcloud still kept trying to connect to the PostgreSQL
server. This blocks further connections to the server, and I can't stop the
PostgreSQL service. All I can do is restart the server and recover the
PostgreSQL data (I don't know why some data disappears).
Can I get some help with this problem?

My OS and PostgreSQL version is
PostgreSQL 10.5
Windows Server 2012 R2


Re: Table Export & Import

2019-04-01 Thread Sathish Kumar
Hi All,

Can you suggest a way to replicate or sync the table, to achieve minimal
downtime moving from dbserver1 to dbserver2, on PostgreSQL 9.5?

Table Size: 160gb
4VCPU, 16gb RAM

On Tue, Apr 2, 2019, 12:19 AM Sathish Kumar  wrote:

> The table size is 160gb. We would like to move/copy this table from one db
> server to another db server.
>
> On Tue, Apr 2, 2019, 12:17 AM Michel Pelletier 
> wrote:
>
>> On Mon, Apr 1, 2019 at 7:47 AM Sathish Kumar  wrote:
>>
>>> Hi Adrian,
>>> We are exporting live table data to a new database, so we need to stop
>>> our application until the export/import is completed. We would like to
>>> minimise this downtime.
>>>
>>
>> It's more complicated if you want to keep your application running and
>> writing to the db while migrating.  There are trigger-level replication
>> tools, like slony that can be used to stream changes to the new database,
>> and then you switch over once you get both of them to parity, but there are
>> some gotchas.  You said the db is only 160GB; it depends a lot on what kind
>> of schema we're talking about, but I imagine it wouldn't take long to just
>> take the downtime and do a normal pg_upgrade.
>>
>>
>>>
>>> On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver 
>>> wrote:
>>>
 On 3/31/19 11:09 PM, Sathish Kumar wrote:
 > Hi Team,
 >
 > We have a requirement to copy a table from one database server to
 > another database server. We are looking for a solution to achieve
 this
 > with lesser downtime on Prod. Can you help us with this?

 So what is creating the downtime now?

 In addition to other suggestions you might want to take a look at:

 https://www.postgresql.org/docs/9.5/postgres-fdw.html


 >
 > Table Size: 160GB
 > Postgresql Server Version: 9.5
 >
 >


 --
 Adrian Klaver
 adrian.kla...@aklaver.com

>>>


Re: stale WAL files?

2019-04-01 Thread Rene Romero Benavides
On Mon, Apr 1, 2019 at 6:30 PM Rene Romero Benavides <
rene.romer...@gmail.com> wrote:

>
> On Sat, Mar 30, 2019 at 5:03 PM Gmail  wrote:
>
>>
>>
>> > On Mar 30, 2019, at 10:54 AM, Gmail  wrote:
>> >
>> >
>>  On Mar 29, 2019, at 6:58 AM, Michael Paquier 
>> wrote:
>> >>>
>> >>> On Thu, Mar 28, 2019 at 09:53:16AM -0600, Rob Sargent wrote:
>> >>> This is pg10 so it's pg_wal.  ls -ltr
>> >>>
>> >>>
>> >>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33
>> >>> 00010CEA00B1
>> >>> -rw---. 1 postgres postgres 16777216 Mar 16 16:33
>> >>> 00010CEA00B2
>> >>>
>> >>> ... 217 more on through to ...
>> >>>
>> >>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01
>> >>> 00010CEA00E8
>> >>> -rw---. 1 postgres postgres 16777216 Mar 16 17:01
>> >>> 00010CEA00E9
>> >>> -rw---. 1 postgres postgres 16777216 Mar 28 09:46
>> >>> 00010CEA000E
>> > I’m now down to 208 Mar 16 WAL files so they are being processed (at
>> least deleted).  I’ve taken a snapshot of the pg_wal dir such that I can
>> see which files get processed. It’s none of the files I’ve listed previously
>>
>> Two more have been cleaned up.  001C and 001D generated at 16:38 Mar 16
>>
>>
>>
>> Please share your complete postgresql.conf file and the results from this
> query:
> SELECT * FROM pg_settings;
> Has someone in the past configured WAL archiving?
> You've run out of disk space, as this log message you shared states:
> "No space left on device"
> What's the output of df -h?
>
> --
> Genius is 1% inspiration and 99% perspiration.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/
>
>
BTW, how far apart are checkpoints happening? Do you have stats on
that? Maybe they're too far apart, and that's why WAL files cannot be
recycled rapidly enough?
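For example, a sketch against the standard stats view in PG10:

    -- timed vs. requested checkpoints since the last stats reset
    SELECT checkpoints_timed, checkpoints_req, stats_reset
    FROM pg_stat_bgwriter;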
-- 
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Re: stale WAL files?

2019-04-01 Thread Gmail
I'm fighting a nasty cold.  It may take a day or two for me to respond 
to recent questions and suggestions.





Re: Table Export & Import

2019-04-01 Thread Rene Romero Benavides
Hi Sathish, as Michel Pelletier pointed out, a trigger-based approach (e.g.
Slony-I, Bucardo) or the pglogical plugin (requires a server restart) is the
way to go. Personally, I've worked with Slony-I; the initial setup is
somewhat tricky, but it works. Depending on how transactional the table is,
and on storage and network speed, we're talking about seconds of sync
difference in the worst-case scenario.

On Mon, Apr 1, 2019 at 8:23 PM Sathish Kumar  wrote:

> Hi All,
>
> Can you tell me a way for table replication or sync or to achieve minimal
> downtime from dbserver1 to dbserver2 on Postgresql 9.5
>
> Table Size: 160gb
> 4VCPU, 16gb RAM
>
> On Tue, Apr 2, 2019, 12:19 AM Sathish Kumar  wrote:
>
>> The table size is 160gb. We would like to move/copy this table from one db
>> server to another db server.
>>
>> On Tue, Apr 2, 2019, 12:17 AM Michel Pelletier <
>> pelletier.mic...@gmail.com> wrote:
>>
>>> On Mon, Apr 1, 2019 at 7:47 AM Sathish Kumar  wrote:
>>>
 Hi Adrian,
 We are exporting live table data to a new database, so we need to stop
 our application until the export/import is completed. We would like to
 minimise this downtime.

>>>
>>> It's more complicated if you want to keep your application running and
>>> writing to the db while migrating.  There are trigger-level replication
>>> tools, like slony that can be used to stream changes to the new database,
>>> and then you switch over once you get both of them to parity, but there are
>>> some gotchas.  You said the db is only 160GB; it depends a lot on what kind
>>> of schema we're talking about, but I imagine it wouldn't take long to just
>>> take the downtime and do a normal pg_upgrade.
>>>
>>>

 On Mon, Apr 1, 2019, 10:22 PM Adrian Klaver 
 wrote:

> On 3/31/19 11:09 PM, Sathish Kumar wrote:
> > Hi Team,
> >
> > We have a requirement to copy a table from one database server to
> > another database server. We are looking for a solution to achieve
> this
> > with lesser downtime on Prod. Can you help us with this?
>
> So what is creating the downtime now?
>
> In addition to other suggestions you might want to take a look at:
>
> https://www.postgresql.org/docs/9.5/postgres-fdw.html
>
>
> >
> > Table Size: 160GB
> > Postgresql Server Version: 9.5
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
Genius is 1% inspiration and 99% perspiration.
Thomas Alva Edison
http://pglearn.blogspot.mx/


Re: Postgresql with nextcloud in Windows Server

2019-04-01 Thread Adrian Klaver

On 4/1/19 6:21 PM, 김준형 wrote:

Hi Community, I have a problem and would like help from the PostgreSQL 
community. My problem is with using PostgreSQL with nextcloud (a cloud 
system): at some point no more connections to the server can be made. I 
checked the log files and found the reason: connections were not being 
disconnected, yet nextcloud still kept trying to connect to the PostgreSQL 
server. This blocks further connections to the server, and I can't stop the 
PostgreSQL service. All I can do is restart the server and recover the 
PostgreSQL data (I don't know why some data disappears).

Can I get some help with this problem?


Not sure I entirely understand the above, so to help can you answer the 
following:


1) The Postgres server is running on nextcloud, correct?

2) The client you are using to connect to the server is on nextcloud 
also or somewhere else?


3) The log files do not show a problem, is this correct?

4) Do you see an error message on the client end when you try to connect?

5) Have you looked at your pg_hba.conf to see if allows connections from 
your client?


6) Not sure what you mean when you say you cannot stop the service, but 
that you can restart it?


7) How do you know that data has disappeared?



My OS and PostgreSQL version is
PostgreSQL 10.5
Windows Server 2012 R2




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Gigantic load average spikes

2019-04-01 Thread Rene Romero Benavides
On Mon, Apr 1, 2019 at 10:35 AM rihad  wrote:

> On 04/01/2019 08:30 PM, Michel Pelletier wrote:
>
>
>
> On Sun, Mar 31, 2019 at 10:49 PM David Rowley <
> david.row...@2ndquadrant.com> wrote:
>
>>
>> Perhaps a bunch of processes waiting on the access exclusive lock on
>> the materialized view being released?
>>
>> log_lock_waits might help you if the MV takes more than a second to
>> refresh, otherwise, you might need to have a look at ungranted locks
>> in pg_locks and see if the number of locks spikes during the refresh.
>>
>
> I think David's got the right idea here.  Like he said, investigate
> pg_locks, if it is the refresh materialized view, you can avoid the problem
> by doing 'REFRESH MATERIALIZED VIEW CONCURRENTLY'.  You will need at least
> one unique index on the table.
>
>
> It is actually refreshed concurrently.
>
>
>
>> --
>>  David Rowley   http://www.2ndQuadrant.com/
>>  PostgreSQL Development, 24x7 Support, Training & Services
>>
>>
>>
Hi. How many vcores does the server have? What's the load average we're
talking about? Do you mind sharing your PostgreSQL configuration?

--


Fwd: Postgresql with nextcloud in Windows Server

2019-04-01 Thread 김준형
From: 김준형
Date: Tue, Apr 2, 2019, 2:02 PM
Subject: Re: Postgresql with nextcloud in Windows Server
To: Adrian Klaver 


Thanks for your reply; I hope these answers help with your questions.

1) Nextcloud is running on the PostgreSQL server; the cloud system needs
the PostgreSQL server.

2) The nextcloud system tries to connect to the PostgreSQL server all the time.
2019-03-27 20:46:59.396 LOG:  connection received: host=xxx.xxx.xxx.xxx
port=
2019-03-27 20:46:59.403 LOG:  connection authorized: user=user_name
database=db_name
2019-03-27 20:46:59.463 LOG:  disconnection: session time: 0:00:00.067
user=user_name database=db_name host=xxx.xxx.xxx.xxx port=
These connections repeat about every 10 seconds.
Other clients also use this PostgreSQL, but not so much (almost 30 people
use this PostgreSQL, including nextcloud system users).


3) Yes, the log files don't show the problem clearly. I checked the log
files and saw a difference when the server couldn't be connected to.
2019-03-27 20:46:59.396 LOG:  connection received: host=xxx.xxx.xxx.xxx
port=
2019-03-27 20:46:59.403 LOG:  connection authorized: user=user_name
database=db_name
This log entry repeated, with no disconnection log.

4) After the problem occurs, if I try to connect to the Windows server, Windows
remote access shows a time-out error. The same goes for the PostgreSQL server.

5) Until the server stops working normally, there is no problem using
PostgreSQL (even from the nextcloud system).

6) No, it doesn't work. The PostgreSQL service status doesn't change.

7) When I restart the server, I check the PostgreSQL data and see that some
schema data has disappeared (only the data). The log files say 'db system
was not properly shut down', then 'automatic recovery in progress', when the
PostgreSQL server starts after Windows Server gets restarted.
I think the 'not properly shut down' comes from cold-booting the Windows
server (I need to turn the server on quickly for some reason, and my server
takes a long time to restart).

On Tue, Apr 2, 2019 at 1:21 PM, Adrian Klaver wrote:

> On 4/1/19 6:21 PM, 김준형 wrote:
> > Hi Community, I have a problem and would like help from the PostgreSQL
> > community. My problem is with using PostgreSQL with nextcloud (a cloud
> > system): at some point no more connections to the server can be made. I
> > checked the log files and found the reason: connections were not being
> > disconnected, yet nextcloud still kept trying to connect to the PostgreSQL
> > server. This blocks further connections to the server, and I can't stop
> > the PostgreSQL service. All I can do is restart the server and recover the
> > PostgreSQL data (I don't know why some data disappears). Can I get some
> > help with this problem?
>
> Not sure I entirely understand the above, so to help can you answer the
> following:
>
> 1) The Postgres server is running on nextcloud, correct?
>
> 2) The client you are using to connect to the server is on nextcloud
> also or somewhere else?
>
> 3) The log files do not show a problem, is this correct?
>
> 4) Do you see an error message on the client end when you try to connect?
>
> 5) Have you looked at your pg_hba.conf to see if allows connections from
> your client?
>
> 6) Not sure what you mean when you say you cannot stop the service, but
> that you can restart it?
>
> 7) How do you know that data has disappeared?
>
> >
> > My OS and PostgreSQL version is
> > PostgreSQL 10.5
> > Windows Server 2012 R2
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: CVE-2019-9193 about COPY FROM/TO PROGRAM

2019-04-01 Thread Michael Paquier
On Mon, Apr 01, 2019 at 10:04:32AM -0400, Jonathan S. Katz wrote:
> +1, though I’d want to see if people get noisier about it before we rule
> out an official response.
> 
> A blog post from a reputable author who can speak to security should
> be good enough and we can make noise through our various channels.

Need a hand?  Not sure if I am reputable enough though :)

By the way, it could be the occasion to consider an official
PostgreSQL blog on the main website.  News items are not really a format
suited to problem analysis or to entering into technical details.
--
Michael

