Re: Where **not** to use PostgreSQL?

2019-03-04 Thread Thomas Kellerer
Thomas Güttler wrote on 28.02.2019 at 12:47:
> where would you suggest someone to **not** use PostgreSQL?
> 
> Why would you do this?
> 
> What alternative would you suggest instead?

Due to the MVCC implementation, I would hesitate to use Postgres in 
environments that have an extremely high and constant rate of DELETE and UPDATE 
statements, e.g. several thousand or even tens of thousands of transactions per 
second without any "quiet" times where vacuum could catch up and clean out dead 
tuples. 

Tuning autovacuum to cope with that is very challenging 
(at least until the new UNDO log implementation is ready ;) 
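
For illustration, a minimal per-table tuning sketch in that spirit, assuming a 
hypothetical high-churn table named hot_queue; the numbers are placeholders to 
show which knobs exist, not recommended values:

-- Make autovacuum trigger much earlier on this one table and throttle it less.
ALTER TABLE hot_queue SET (
    autovacuum_vacuum_scale_factor = 0.0,    -- ignore the size-based threshold
    autovacuum_vacuum_threshold    = 10000,  -- vacuum after ~10k dead rows
    autovacuum_vacuum_cost_delay   = 0,      -- don't sleep between cost batches
    autovacuum_vacuum_cost_limit   = 10000   -- allow more work per round
);

-- Check whether vacuum is actually keeping up with the churn.
SELECT relname, n_dead_tup, last_autovacuum, last_vacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;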







Re: Performance comparison between Pgsql 10.5 and Pgsql 11.2

2019-03-04 Thread Nicola Contu
I did an analyze in stages on both.
And Yes both are compiled.
This is the configure command (change 10.6 for PG10)

./configure --prefix=/usr/local/pgsql11.2

See attached perf report. The difference seems to all be in this line, but
I am not sure:

+   26.80%  0.00%    222  postmaster  [kernel.kallsyms]   [k] system_call_fastpath



I am using CentOS 7.
With CentOS I am using this profile for tuned-adm:
[root@STAGING-CMD1 ~]#  tuned-adm active
Current active profile: latency-performance


On Sat, Mar 2, 2019 at 20:41, Thomas Munro wrote:

> On Sat, Mar 2, 2019 at 5:02 AM Ray O'Donnell  wrote:
> > On 01/03/2019 15:01, Nicola Contu wrote:
> > > Hello,
> > > is there any reason why I am getting worse results using pgsql11.2 in
> > > writing comparing it with pgsql 10.6?
> > >
> > > I have two Instances, both just restored, so no bloats.
> > > Running read queries I have pretty much same results, a little bit
> > > better on pg11- Running writes the difference is in favour of 10.
> >
> > Did you run ANALYZE on the databases after restoring?
>
> If you can rule out different query plans, and if you compiled them
> both with the same compiler and optimisation levels and without
> cassert enabled (it's a long shot but I mentioned that because you
> showed a path in /usr/local so perhaps you're hand-compiling 11, but
> 10 came from a package?), then the next step might be to use a
> profiler like "perf" (or something equivalent on your OS) to figure
> out where 11 is spending more time in the write test?
>
> --
> Thomas Munro
> https://enterprisedb.com
>
PGSQL11

+   26.80%  0.00%    222  postmaster  [kernel.kallsyms]   [k] system_call_fastpath
+   13.46%  0.04%    751  postmaster  [kernel.kallsyms]   [k] sys_futex
+   13.43%  0.00%    782  postmaster  [kernel.kallsyms]   [k] do_futex
+   12.89%  0.00%      0  postmaster  [unknown]           [k]
+    9.09%  0.00%      0  swapper     [kernel.kallsyms]   [k] start_cpu
+    9.09%  0.14%   2860  swapper     [kernel.kallsyms]   [k] cpu_startup_entry
+    9.09%  0.00%      0  swapper     [kernel.kallsyms]   [k] start_secondary
+    8.97%  0.00%    389  postmaster  [kernel.kallsyms]   [k] page_fault
+    8.97%  0.00%    687  postmaster  [kernel.kallsyms]   [k] do_page_fault
+    8.97%  0.73%   6471  postmaster  [kernel.kallsyms]   [k] __do_page_fault
+    8.66%  0.14%   1648  postmaster  libpthread-2.17.so  [.] sem_post@@GLIBC_2.2.5
+    8.40%  0.70%  13567  postmaster  [kernel.kallsyms]   [k] get_futex_key
+    8.08%  0.00%      0  postmaster  [unknown]           [k] 0x0080
+    8.08%  0.93%    642  postmaster  libpthread-2.17.so  [.] do_futex_wait.constprop.1
+    7.87%  0.19%   2666  postmaster  [kernel.kallsyms]   [k] futex_wake
+    7.68%  0.19%   9343  postmaster  [kernel.kallsyms]   [k] handle_mm_fault
+    7.34%  0.00%    207  swapper     [kernel.kallsyms]   [k] arch_cpu_idle
+    7.34%  0.00%    979  swapper     [kernel.kallsyms]   [k] cpuidle_idle_call
+    7.15%  0.00%   2671  postmaster  [kernel.kallsyms]   [k] handle_pte_fault
+    5.87%  0.00%      6  postmaster  libc-2.17.so        [.] __GI___setsid
+    5.87%  0.00%      9  postmaster  [kernel.kallsyms]   [k] sys_setsid
+    5.87%  0.00%     15  postmaster  [kernel.kallsyms]   [k] sched_autogroup_create_attach
+    5.41%  5.41%  48199  postmaster  postgres            [.] hash_search_with_hash_value
+    5.40%  5.40%  44439  postmaster  postgres            [.] LWLockWaitListLock
+    5.38%  0.72%   1425  postmaster  [kernel.kallsyms]   [k] futex_wait
+    5.07%  0.00%      5  postmaster  [kernel.kallsyms]   [k] sched_create_group
+    4.23%  0.59%   2253  postmaster  [kernel.kallsyms]   [k] __slab_alloc
+    4.13%  0.00%   2330  postmaster  [kernel.kallsyms]   [k] kmem_cache_alloc_node_trace
+    3.75%  0.16%   2919  postmaster  [kernel.kallsyms]   [k] unlock_page
+    3.67%  3.67%  49595  postmaster  postgres            [.] LWLockAttemptLock
+3.64% 0.33%   

Re: Performance comparison between Pgsql 10.5 and Pgsql 11.2

2019-03-04 Thread Imre Samu
> is there any reason why I am getting worse results using pgsql11.2 in
writing comparing it with pgsql 10.6?
>... And Yes both are compiled.

Why 10.6?

According to the release notes,
"14th February 2019: PostgreSQL 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21
Released!"  https://www.postgresql.org/about/news/1920/
IMHO it would be better to compare PG11.2 with PG10.7 (similar bug
fixes and improvements + same fsync() behavior).

"This release changes the behavior in how PostgreSQL interfaces with
fsync() and includes fixes for partitioning and over 70 other bugs that
were reported over the past three months"

Imre



Nicola Contu wrote (on Mon, Mar 4, 2019, 13:14):

> I did a analyze in stages on both.
> And Yes both are compiled.
> This is the configure command (change 10.6 for PG10)
>
> ./configure --prefix=/usr/local/pgsql11.2
>
> See attached perf report. The difference seems to be all in this line, but
> not sure :
>
> +   26.80% 0.00%   222  postmaster   [kernel.kallsyms]
> [k] system_call_fastpath
>
>
>
> I am using CentOS 7
> With Centos I am using this profile for tuned-adm
> [root@STAGING-CMD1 ~]#  tuned-adm active
> Current active profile: latency-performance
>
>
> On Sat, Mar 2, 2019 at 20:41, Thomas Munro <thomas.mu...@gmail.com> wrote:
>
>> On Sat, Mar 2, 2019 at 5:02 AM Ray O'Donnell  wrote:
>> > On 01/03/2019 15:01, Nicola Contu wrote:
>> > > Hello,
>> > > is there any reason why I am getting worse results using pgsql11.2 in
>> > > writing comparing it with pgsql 10.6?
>> > >
>> > > I have two Instances, both just restored, so no bloats.
>> > > Running read queries I have pretty much same results, a little bit
>> > > better on pg11- Running writes the difference is in favour of 10.
>> >
>> > Did you run ANALYZE on the databases after restoring?
>>
>> If you can rule out different query plans, and if you compiled them
>> both with the same compiler and optimisation levels and without
>> cassert enabled (it's a long shot but I mentioned that because you
>> showed a path in /usr/local so perhaps you're hand-compiling 11, but
>> 10 came from a package?), then the next step might be to use a
>> profiler like "perf" (or something equivalent on your OS) to figure
>> out where 11 is spending more time in the write test?
>>
>> --
>> Thomas Munro
>> https://enterprisedb.com
>>
>


Re: Performance comparison between Pgsql 10.5 and Pgsql 11.2

2019-03-04 Thread Nicola Contu
Because I have 10.6 in production :) and I am comparing with what I will be
losing.
And I read that in the release notes, but as said in my first email, even
data_sync_retry=on (going back to the previous behavior) doesn't make any
difference.

So I am looking for something that will keep my performance but still
allow me to upgrade to 11 in production.
Also, when trying with 11.1, the problem still seems to be there.
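
As a quick sanity check (my sketch, not from the thread), it is worth confirming
the setting actually took effect on both clusters; data_sync_retry is a
postmaster-level parameter, so it only changes after a full restart, not a reload:

-- Run on each cluster after editing postgresql.conf and restarting.
SHOW data_sync_retry;

SELECT name, setting, context, pending_restart
FROM pg_settings
WHERE name = 'data_sync_retry';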

On Mon, Mar 4, 2019 at 14:45, Imre Samu wrote:

> > is there any reason why I am getting worse results using pgsql11.2 in
> writing comparing it with pgsql 10.6?
> >... And Yes both are compiled.
>
> Why 10.6?
>
> according to release notes
> "14th February 2019: PostgreSQL 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21
> Released!"  https://www.postgresql.org/about/news/1920/
> imho:* it would be better to compare PG11.2  with  PG10.7  *(  similar
> bug Fixes and Improvements + same fsync()  behavior )
>
> *"This release changes the behavior in how PostgreSQL interfaces with
> fsync() and includes fixes for partitioning and over 70 other bugs that
> were reported over the past three months"*
>
> Imre
>
>
>
> Nicola Contu wrote (on Mon, Mar 4, 2019, 13:14):
>
>> I did a analyze in stages on both.
>> And Yes both are compiled.
>> This is the configure command (change 10.6 for PG10)
>>
>> ./configure --prefix=/usr/local/pgsql11.2
>>
>> See attached perf report. The difference seems to be all in this line,
>> but not sure :
>>
>> +   26.80% 0.00%   222  postmaster   [kernel.kallsyms]
>> [k] system_call_fastpath
>>
>>
>>
>> I am using CentOS 7
>> With Centos I am using this profile for tuned-adm
>> [root@STAGING-CMD1 ~]#  tuned-adm active
>> Current active profile: latency-performance
>>
>>
>> On Sat, Mar 2, 2019 at 20:41, Thomas Munro <thomas.mu...@gmail.com> wrote:
>>
>>> On Sat, Mar 2, 2019 at 5:02 AM Ray O'Donnell  wrote:
>>> > On 01/03/2019 15:01, Nicola Contu wrote:
>>> > > Hello,
>>> > > is there any reason why I am getting worse results using pgsql11.2 in
>>> > > writing comparing it with pgsql 10.6?
>>> > >
>>> > > I have two Instances, both just restored, so no bloats.
>>> > > Running read queries I have pretty much same results, a little bit
>>> > > better on pg11- Running writes the difference is in favour of 10.
>>> >
>>> > Did you run ANALYZE on the databases after restoring?
>>>
>>> If you can rule out different query plans, and if you compiled them
>>> both with the same compiler and optimisation levels and without
>>> cassert enabled (it's a long shot but I mentioned that because you
>>> showed a path in /usr/local so perhaps you're hand-compiling 11, but
>>> 10 came from a package?), then the next step might be to use a
>>> profiler like "perf" (or something equivalent on your OS) to figure
>>> out where 11 is spending more time in the write test?
>>>
>>> --
>>> Thomas Munro
>>> https://enterprisedb.com
>>>
>>


Re: Update does not move row across foreign partitions in v11

2019-03-04 Thread Derek Hans
Based on a reply to my reporting this as a bug, moving rows out of foreign
partitions is not yet implemented, so this is behaving as expected. There's
a mention of this limitation in the Notes section of the UPDATE docs.
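
For anyone reproducing this, a rough sketch of the setup being discussed, with
hypothetical names (remote_srv, measurements); on v11 the final UPDATE leaves
the row in the foreign partition instead of moving it to the local one:

-- Parent table partitioned by list, with one local and one foreign partition.
CREATE TABLE measurements (region text NOT NULL, payload text)
    PARTITION BY LIST (region);

CREATE TABLE measurements_eu
    PARTITION OF measurements FOR VALUES IN ('eu');

-- remote_srv is a postgres_fdw server pointing at instance B, which holds
-- a plain table with the same columns.
CREATE FOREIGN TABLE measurements_us
    PARTITION OF measurements FOR VALUES IN ('us')
    SERVER remote_srv;

INSERT INTO measurements VALUES ('us', 'routed to the foreign partition');

-- Local -> local and local -> foreign moves work, but this foreign -> local
-- move is the case that is not implemented in v11.
UPDATE measurements SET region = 'eu' WHERE region = 'us';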

On Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera 
wrote:

> On 2019-Feb-22, Derek Hans wrote:
>
> > I've set up 2 instances of PostgreSQL 11. On instance A, I created a
> table
> > with 2 local partitions and 2 partitions on instance B using foreign data
> > wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.
> > Inserting rows into this table works as expected, with rows ending up in
> > the appropriate partition. However, updating those rows only moves them
> > across partitions in some of the situations:
> >
> >- From local partition to local partition
> >- From local partition to foreign partition
> >
> > Rows are not moved
> >
> >- From foreign partition to local partition
> >- From foreign partition to foreign partition
> >
> > Is this the expected behavior? Am I missing something or configured
> > something incorrectly?
>
> Sounds like a bug to me.
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


-- 
*Derek*
+1 (415) 754-0519 | derek.h...@gmail.com | Skype: derek.hans


Re: Performance comparison between Pgsql 10.5 and Pgsql 11.2

2019-03-04 Thread Tom Lane
Nicola Contu  writes:
> See attached perf report. The difference seems to be all in this line, but
> not sure :
> +   26.80%  0.00%    222  postmaster  [kernel.kallsyms]   [k] system_call_fastpath

That would suggest that many more kernel calls are happening, which is
something you could usefully investigate with strace, perhaps.

regards, tom lane



Re: support for JSON Web Token

2019-03-04 Thread Eugen Stan
Hi Michel,

Thanks for the reply. I will pitch in my view. My background comes from
developing (mainly Java based) applications:

On 03.03.2019 22:18, Michel Pelletier wrote:
> On Sat, Mar 2, 2019 at 4:09 PM Eugen Stan  > wrote:
>
> Hi,
>
> I would like to advocate forJSON web token support in PostgreSQL.
>
> JWT tokens are used in a lot of web applications and I think there are
> some very nice use cases for passing the token down to PostgreSQL.
>
>
> pgjwt author here.  While I do agree with you that a direct
> integration would simplify a few things, the idea doesn't really bring
> enough benefit to overcome some downsides.
>  
>
> Some of the first thing that comes to mind is pairing it with row
> level
> security to implement authorization policies and data access
> directly in DB.
>
>
> It's possible to do this now, tools like PostgREST and Postgraphile
> use a pattern where they log in as an Anonymous user but when they get
> a valid JWT, they SET ROLE to the correct user.  It's also possible to
> inspect jwt claims in RLS policy checks via variables that are set on
> each query.  It's worth noting that neither of these projects use
> pgjwt but rather use the libraries that come with their runtimes.
Thanks for the info. I heard about the projects but I am not familiar
with their inner workings. It's nice to know there are people working on
this.
>
> But more abstractly how would this integration work?  Postgres does
> not speak HTTP, you'd have to extend the client protocol.  That's
> gonna be a huge hurdle since its a large change to a lot of code
> outside of postgres itself.

I know PostgreSQL does not speak HTTP and I don't intend to do something
as wild as extending the client protocol. Though, that could be a good
research project.

My idea is not that far from what PostgREST and Postgraphile are doing.
I am probably going to check their solution out in more detail.
The idea is to SET a JWT access_token (not the role) for each query.
JWT has a simple, well-defined format and I believe it is quite stable.

The JWT contains the user identity and claims about the user: what roles
the user belongs to, what permissions he has, etc.

The RLS policy can use those claims to make access decisions.

JWT support is not actually needed if the JWT parsing is done by the app
and the claims are SET on the query directly. 
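
A rough sketch of that pattern (my illustration, not code from this thread): the
application verifies the token, copies the claims it cares about into a
transaction-local setting, and an RLS policy reads them back with
current_setting(); the setting name jwt.claims.sub and the table are made up
for the example.

-- One-time setup: a policy keyed on a claim the application will provide.
CREATE TABLE orders (id bigint PRIMARY KEY, owner text NOT NULL, total numeric);
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY owner_only ON orders
    USING (owner = current_setting('jwt.claims.sub', true));
-- Note: RLS is not applied to the table owner unless FORCE ROW LEVEL SECURITY is set.

-- Per request, after the app has verified the JWT signature:
BEGIN;
SELECT set_config('jwt.claims.sub', 'user-42', true);  -- true = local to this transaction
SELECT * FROM orders;  -- the policy now restricts rows to owner = 'user-42'
COMMIT;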


>  
>
> I've done a bit of research and I found some prior work in [pgjwt] and
> [postgres-jwt-prototype].
>
> It seems to be pretty easy to support since most of the work is done:
> PostgreSQL has crypto support via pgcrypto and has native JSON
> handling
> functions.
>
>
> pgcrypto only supports secret key signing with hmac, but it doesn't
> support public key signing.  pgjwt was always just an expedient tool,
> for me at least, to use as an early adopter of PostgREST.  It is nice
> to be able to generate valid jwts from the db with a secret.  But for
> any more serious use it's going to need not only work on pgcrypto or
> (gratuitous self-plug) pgsodium
> (https://github.com/michelp/pgsodium) to support public key signing,
> but also some eyeballs from real security people.  I
> wrote them and I personally wouldn't use pgjwt (other than trivial key
> generation) or pgsodium for production use without some more buy-in
> from serious security people.  That's huge hurdle #2.

Since the app can do mostly anything in the DB, I don't see a need to
validate things on the Postgres side. Nice job on pgsodium.

>  
>
> JWT is a widely used technology, especially in web applications.
>
> This is why I think PostgreSQL should adopt a JWT extension in
> contrib.
>
> I believe PostgreSQL will benefit, especially for web app use.
>
>
> What do you think?
>
> What would it take for a JWT extension to be added to postgresql
> contrib
> or the official postgresql extensions ?
>
>
> I like your enthusiasm for the idea, but it has some serious hurdles
> noted.  In the mean time, if you're idea is to get working quickly
> with postgres and JWT, I would look at PostgREST and Postgraphile, and
> I hear Hasura is quite cool too although I don't have any firsthand
> experience with it.

Thanks. I will check out both PostgREST and Postgraphile. I've worked a
bit with join-monster to put a GraphQL API on PostgreSQL but it did not
cover all the things that we needed.

>
> -Michel
>  
>
>
>
> Thanks,
>
> Eugen
>
> [pgjwt] https://github.com/michelp/pgjwt
>
> [postgres-jwt-prototype]
> https://github.com/larsar/postgres-jwt-prototype
>
>
>





Re: Optimizing Database High CPU

2019-03-04 Thread Jeff Janes
On Wed, Feb 27, 2019 at 5:01 PM Michael Lewis  wrote:

> If those 50-100 connections are all active at once, yes, that is high.
>> They can easily spend more time fighting each other over LWLocks,
>> spinlocks, or cachelines rather than doing useful work.  This can be
>> exacerbated when you have multiple sockets rather than all cores in a
>> single socket.  And these problems are likely to present as high Sys times.
>>
>> Perhaps you can put up a connection pooler which will allow 100
>> connections to all think they are connected at once, but forces only 12 or
>> so to actually be active at one time, making the others transparently queue.
>>
>
> Can you expound on this or refer me to someplace to read up on this?
>

Just based on my own experimentation.  This is not a blanket
recommendation, but specific to the situation where we already suspect
there is contention and the server is too old to have the
pg_stat_activity.wait_event column.


> Context, I don't want to thread jack though: I think I am seeing similar
> behavior in our environment at times with queries that normally take
> seconds taking 5+ minutes at times of high load. I see many queries showing
> buffer_mapping as the LwLock type in snapshots but don't know if that may
> be expected.
>

It sounds like your processes are fighting to reserve buffers in
shared_buffers in which to read data pages.  But those data pages are
probably already in the OS page cache, otherwise reading them from disk would
be slow enough that you would be seeing some type of IO wait, or buffer_io,
rather than buffer_mapping as the dominant wait type.  So I think that
means you have most of your data in RAM, but not enough of it in
shared_buffers.  You might be in a rare situation where setting
shared_buffers to a high fraction of RAM, rather than the usual low
fraction, is called for.  Increasing NUM_BUFFER_PARTITIONS might also be
useful, but that requires a recompilation of the server.  But do these
spikes correlate with anything known at the application level?  A change in
the mix of queries, or a long report or maintenance operation?  Maybe the
query plans briefly toggle over to using seq scans rather than index scans
or vice versa, which drastically changes the block access patterns?
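
A sketch of how one might confirm that on a server new enough to have wait
events (9.6+); the sampling query and the shared_buffers value are illustrative
only:

-- Sample what active backends are waiting on; run repeatedly during a spike.
SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY 1, 2
ORDER BY count(*) DESC;

-- If LWLock / buffer_mapping dominates and the working set fits in RAM,
-- a larger shared_buffers may help (takes effect only after a restart).
ALTER SYSTEM SET shared_buffers = '48GB';  -- hypothetical value for a 64GB host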


> In our environment PgBouncer will accept several hundred connections and
> allow up to 100 at a time to be active on the database which are VMs with
> ~16 CPUs allocated (some more, some less, multi-tenant and manually
> sharded). It sounds like you are advocating for connection max very close
> to the number of cores. I'd like to better understand the pros/cons of that
> decision.
>

There are good reasons to allow more than that.  For example, your
application holds some transactions open briefly while it does some
cogitation on the application-side, rather than immediately committing and
so returning the connection to the connection pool.  Or your server has a
very high IO capacity and benefits from lots of read requests in the queue
at the same time, so it can keep every spindle busy and every rotation
productive.  But, if you have no reason to believe that any of those
situations apply to you, but do have evidence that you have lock contention
between processes, then I think that limiting the number active processes
to the number of cores is a good starting point.

Cheers,

Jeff


Re: Performance comparison between Pgsql 10.5 and Pgsql 11.2

2019-03-04 Thread Thomas Munro
On Tue, Mar 5, 2019 at 5:01 AM Nicola Contu  wrote:
> Attached a part of the strace running the pgbench command for pg11
> Also attached strace_10 for pg10.6.

That looks like strace output from pgbench, and I don't see any
interesting differences between v10 and v11 (though I'm surprised to
see it using poll() instead of ppoll(), and opening a new connection
for every transaction).

How about syscalls on the server side?  You could start it with
something like "strace -f path/to/postgres -D path/to/pgdata" (-f for
follow children), and perhaps also use -c so that it shows aggregated
data (up until you ^C it) instead of every syscall?

-- 
Thomas Munro
https://enterprisedb.com



Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Perumal Raj
Hi Justin

I could see a bunch of functions under the reorg schema.

AS '$libdir/pg_reorg', 'reorg_disable_autovacuum';
AS '$libdir/pg_reorg', 'reorg_get_index_keys';
AS '$libdir/pg_reorg', 'reorg_apply';
AS '$libdir/pg_reorg', 'reorg_drop';
AS '$libdir/pg_reorg', 'reorg_indexdef';
AS '$libdir/pg_reorg', 'reorg_swap';
AS '$libdir/pg_reorg', 'reorg_trigger';
AS '$libdir/pg_reorg', 'reorg_version';

I am not sure about the impact of dropping these functions.

Are these functions seeded (default) ones?

Regards,
Raj


On Sun, Mar 3, 2019 at 7:38 PM Perumal Raj  wrote:

> Thanks. Will definitely try that option and keep you posted.
>
> Thanks again for redirecting me to the right group.
>
>
> Perumal Raju
>
> On Sun, Mar 3, 2019, 6:51 AM Justin Pryzby  wrote:
>
>> Moving to -general list (-hackers is for development topics like proposed
>> patches and patch reviews and beta testing and crash reports).
>>
>> On Thu, Feb 28, 2019 at 10:13:58AM -0800, Perumal Raj wrote:
>> > could not load library "$libdir/pg_reorg":
>> > ERROR:  could not access file "$libdir/pg_reorg": No such file or
>> directory
>>
>> As Sergei said, you can run pg_dump -s and look for references to reorg,
>> and
>> drop them.
>>
>> Or, you could try this:
>> CREATE EXTENSION pg_reorg FROM unpackaged;
>>
>> Or maybe this:
>> CREATE EXTENSION pg_repack FROM unpackaged;
>>
>> If that works, you can DROP EXTENSION pg_repack;
>>
>> Otherwise, I think you can maybe do something like:
>> DROP SCHEMA pg_repack CASCADE; -- or,
>> DROP SCHEMA pg_reorg CASCADE;
>>
>> Please send output of: \dn
>>
>


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Justin Pryzby
On Mon, Mar 04, 2019 at 01:37:30PM -0800, Perumal Raj wrote:
> I could see bunch of functions under reorg schema.

Those functions are the ones preventing you from upgrading.
You should drop schema pg_reorg cascade.
You can run it in a transaction first to see what it will drop.
But after the upgrade, you can CREATE EXTENSION pg_repack, which is a fork of
pg_reorg, which is itself no longer maintained.

Justin
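
A sketch of that workflow on the old cluster, using the names from this thread
(the schema is reorg, the missing library is $libdir/pg_reorg); the dry run
inside a transaction costs nothing:

-- Find every C function still pointing at the missing pg_reorg library.
SELECT n.nspname, p.proname
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.probin = '$libdir/pg_reorg';

-- Dry run: see what a cascaded drop would remove, then roll it back.
BEGIN;
DROP SCHEMA reorg CASCADE;
ROLLBACK;

-- Once satisfied (and after taking a backup), drop it for real, upgrade,
-- and install the maintained fork on the new cluster:
-- DROP SCHEMA reorg CASCADE;
-- CREATE EXTENSION pg_repack;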



master-> 2 hot standbys

2019-03-04 Thread Julie Nishimura
Hello,
Our current master (9.2) has two active standbys. Can you please help me out with 
the right sequence of events if we would like to promote one of the current 
standbys to master and convert the master to a standby?

Your help is appreciated.

Thanks,
Julie


From: Julie Nishimura
Sent: Thursday, February 28, 2019 6:00 PM
To: pgsql-general@lists.postgresql.org
Subject: validation of hot standby

Hello,
We set up our hot standby by putting the master into archive mode and issuing a 
pg_basebackup command, and we are about to start our hot standby instance. What 
are the things you suggest for validating (assuming it comes up cleanly) that 
replication has caught up?

What else do I need to check to make sure the data is clean and not 
corrupted?

thanks for your help


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Perumal Raj
Hi Justin

Does it mean that these functions are default and came with 9.2?
I am wondering how these functions were created in the DB, as the
library ($libdir/pg_reorg) does not exist in the system.

Note:
My schema name is reorg not pg_reorg




On Mon, Mar 4, 2019 at 1:45 PM Justin Pryzby  wrote:

> On Mon, Mar 04, 2019 at 01:37:30PM -0800, Perumal Raj wrote:
> > I could see bunch of functions under reorg schema.
>
> Those functions are the ones preventing you from upgrading.
> You should drop schema pg_reorg cascade.
> You can run it in a transaction first to see what it will drop.
> But after the upgrade, you can CREATE EXTENSION pg_repack, which is a fork
> of
> pg_reorg, which is itself no longer maintained.
>
> Justin
>


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Justin Pryzby
On Mon, Mar 04, 2019 at 02:21:11PM -0800, Perumal Raj wrote:
> Does it mean that these functions are default and came with 9.2 ?
> I am wondering how these functions are created in the DB as the
> library($libdir/pg_reorg)  is not exists in system

I don't think it's default.
It was probably installed by running some SQL script.

It tentatively sounds safe to me to drop, but you should take a backup and
inspect and double-check your pg_dump output and the output of "begin; drop schema
reorg cascade".

Justin



Re: master-> 2 hot standbys

2019-03-04 Thread Ben Chobot

> On Mar 4, 2019, at 1:59 PM, Julie Nishimura  wrote:
> 
> Hello,
> Our current master 9.2 has two active standbys. Can you please help me out 
> with the right sequence of events if we would like to promote one of current 
> standbys to master and convert master to standby? 

It depends on how you've set things up, of course, but generally you can run 
"pg_ctl promote" on one of your slaves. This will make that slave start a new 
timeline and begin to act as a master. To restore your old master to a standby, 
you will need to rebuild it with something like pg_basebackup, as I do not 
believe pg_rewind is supported with 9.2.

https://www.postgresql.org/docs/9.5/warm-standby-failover.html

Terminating client connections on the old master to avoid splitbrain and then 
making those clients reconnect to your new master is left as an exercise to the 
reader - how you do it really depends on your environment - but be sure you do 
it right, because splitbrain is a real danger. In a similar vein, you don't 
want two masters trying to save the same wals to the same wal archive. 

Also, if your two slaves are using asynchronous replication, be aware that the 
slave you choose to become the new master has a chance to be behind the other 
slave in wal replay when you promote it, unless you take precautions. This 
would mean your other slave won't be able to sync up with the newly promoted 
master.
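
A sketch of the kind of precaution meant here (my example, not from the thread):
before promoting, compare how far each standby has replayed and pick the one
that is furthest ahead. The function names below are the pre-10 "xlog"
spellings, matching the versions discussed:

-- Run on each standby and compare; the standby with the higher location
-- (and most recent timestamp) is the better promotion candidate.
SELECT pg_last_xlog_replay_location(),
       pg_last_xact_replay_timestamp();

-- On the primary, while it is still up, the same picture per standby:
SELECT application_name, state, sent_location, replay_location
FROM pg_stat_replication;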

VACUUM FREEZE and replication lag

2019-03-04 Thread Martín Fernández
Hello everyone,

We have a very big table in our pg92 database that requires a manual VACUUM 
FREEZE in order to keep a sane number of transaction ids available. Yesterday we 
did a VACUUM FREEZE on this table that took roughly 9 hours. After performing 
the operation we got back roughly 0.5 billion transaction ids. I was wondering 
how much this operation can affect replication lag? If VACUUM FREEZE needs to 
mark a lot of tuples with the frozen xid, will this mean that a lot of information 
is going to be streamed in the WAL files? Just trying to understand the relation. 
As soon as we started the operation, our standbys that have the lower IO capacity 
got significantly impacted for several hours. 

Thanks!

Martín
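
As background for the question (my illustration, not from the original mail):
freezing rewrites tuple headers, and those page changes are WAL-logged and
streamed to the standbys, which is consistent with the IO impact described. The
remaining headroom can be watched with queries like these:

-- Cluster-wide headroom: how old the oldest unfrozen xid is per database.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;

-- Per-table view, to spot the next tables that will need freezing.
SELECT relname, age(relfrozenxid) AS xid_age,
       pg_size_pretty(pg_table_size(oid)) AS table_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;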

Re: master-> 2 hot standbys

2019-03-04 Thread Julie Nishimura
Thank you. I made a mistake, our version is PostgreSQL 9.6.2


From: Ben Chobot 
Sent: Monday, March 4, 2019 5:08 PM
To: Julie Nishimura
Cc: pgsql-general@lists.postgresql.org
Subject: Re: master-> 2 hot standbys


On Mar 4, 2019, at 1:59 PM, Julie Nishimura <juliez...@hotmail.com> wrote:

Hello,
Our current master 9.2 has two active standbys. Can you please help me out with 
the right sequence of events if we would like to promote one of current 
standbys to master and convert master to standby?

It depends on how you've set things up, of course, but generally you can run 
"pg_ctl promote" on one of your slaves. This will make that slave start a new 
timeline and begin to act as a master. To restore your old master to a standby, 
you will need to rebuild it with something like pg_basebackup, as I do not 
believe pg_rewind is supported with 9.2.

https://www.postgresql.org/docs/9.5/warm-standby-failover.html

Terminating client connections on the old master to avoid splitbrain and then 
making those clients reconnect to your new master is left as an exercise to the 
reader - how you do it really depends on your environment - but be sure you do 
it right, because splitbrain is a real danger. In a similar vein, you don't 
want two masters trying to save the same wals to the same wal archive.

Also, if your two slaves are using asynchronous replication, be aware that the 
slave you choose to become the new master has a chance to be behind the other 
slave in wal replay when you promote it, unless you take precautions. This 
would mean your other slave won't be able to sync up with the newly promoted 
master.


Re: Question about pg_upgrade from 9.2 to X.X

2019-03-04 Thread Sergei Kornilov
Hi

It seems this is an unpackaged extension, usually installed prior to the 9.1 release. 
Maybe pg_reorg does not even support the "create extension" syntax. That was long ago 
and the project homepage is unavailable now. The pg_repack documentation mentions 
"support for PostgreSQL 9.2 and EXTENSION packaging" as improvements.

> Are these functions seeded (default) ones?

No, it's not default.

regards, Sergei