overcommit_ratio setting

2021-06-14 Thread Yi Sun
Hello,

After checking the docs, I found that only vm.overcommit_memory=2 is
mentioned; there is no recommended value for vm.overcommit_ratio.

https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT

Some articles mention configuring 80 or 90 in their environments.

So is it OK to just set vm.overcommit_ratio to 90?

Thank you


pg_dumpall --exclude-database case folding, was Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-14 Thread Andrew Dunstan


On 6/10/21 2:23 PM, Andrew Dunstan wrote:
> On 6/10/21 2:00 PM, Tom Lane wrote:
>> "Dean Gibson (DB Administrator)"  writes:
>>> On 2021-06-10 09:54, Ranier Vilela wrote:
>>>> Your cmd lacks =
>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql
>>> I read that before posting, but missed that.  Old command line patterns 
>>> die hard!
>>> However, the result was the same:  3.5GB before running out of space.
>> [ experiments... ]  Looks like you gotta do it like this:
>>
>>  pg_dumpall '--exclude-database="MailPen"' ...
>>
>> This surprises me, as I thought it was project policy not to
>> case-fold command-line arguments (precisely because you end
>> up needing weird quoting to prevent that).
>>
>>  
>
>
> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used
> the same logic that we use for pg_dump's --exclude-* options, so we need
> to check if they have similar issues.
>
>



Peter Eisentraut has pointed out to me that this is documented, albeit a
bit obscurely for pg_dumpall. But it is visible on the pg_dump page.


Nevertheless, it's a bit of a POLA violation as we've seen above, and
I'd like to get it fixed, if there's agreement, both for this pg_dumpall
option and for pg_dump's pattern matching options.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: pg_dumpall --exclude-database case folding, was Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-14 Thread Tom Lane
Andrew Dunstan  writes:
> On 6/10/21 2:23 PM, Andrew Dunstan wrote:
>> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used
>> the same logic that we use for pg_dump's --exclude-* options, so we need
>> to check if they have similar issues.

> Peter Eisentraut has pointed out to me that this is documented, albeit a
> bit obscurely for pg_dumpall. But it is visible on the pg_dump page.

Hmm.

> Nevertheless, it's a bit of a POLA violation as we've seen above, and
> I'd like to get it fixed, if there's agreement, both for this pg_dumpall
> option and for pg_dump's pattern matching options.

+1, but the -performance list isn't really where to hold that discussion.
Please start a thread on -hackers.

regards, tom lane




Re: overcommit_ratio setting

2021-06-14 Thread Julien Rouhaud
On Mon, Jun 14, 2021 at 06:16:35PM +0800, Yi Sun wrote:
> 
> So is it OK to just set vm.overcommit_ratio to 90?

This parameter entirely depends on the amount of RAM and swap you have on your
server, and how much memory you want to be allocable.

See https://www.kernel.org/doc/Documentation/vm/overcommit-accounting.

It's usually good practice not to allow more than your RAM to be allocated,
and to keep at least a bit of memory non-allocatable so that some OS cache is
preserved, but that's up to you.
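For reference, with vm.overcommit_memory=2 the kernel caps allocations at
CommitLimit = swap + RAM * overcommit_ratio / 100 (ignoring the hugepage
pool). A small sketch with made-up sizes, showing why a ratio like 90 can
allow allocations beyond physical RAM once swap is present:

```python
# Overcommit accounting under vm.overcommit_memory=2 (per the kernel's
# overcommit-accounting doc): CommitLimit = swap + RAM * ratio / 100.
def commit_limit_gb(ram_gb, swap_gb, overcommit_ratio):
    return swap_gb + ram_gb * overcommit_ratio / 100

# Hypothetical box: 16 GB RAM, 4 GB swap.
print(commit_limit_gb(16, 4, 75))  # 16.0 -- caps allocations at RAM size
print(commit_limit_gb(16, 4, 90))  # exceeds the 16 GB of physical RAM
```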




Re: overcommit_ratio setting

2021-06-14 Thread Laurenz Albe
On Mon, 2021-06-14 at 18:16 +0800, Yi Sun wrote:
> After checking the docs, I found that only vm.overcommit_memory=2 is
> mentioned; there is no recommended value for vm.overcommit_ratio.
> https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
> Some articles mention configuring 80 or 90 in their environments.
> So is it OK to just set vm.overcommit_ratio to 90?

It depends on the size of RAM and swap space:

overcommit_ratio < (RAM - swap) / RAM * 100

Here, RAM is the RAM available to PostgreSQL.
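For example, plugging hypothetical sizes into that bound (the numbers and the
helper name are made up for illustration):

```python
# Largest vm.overcommit_ratio that keeps the commit limit within physical
# RAM, per the bound overcommit_ratio < (RAM - swap) / RAM * 100.
def ratio_upper_bound(ram_gb, swap_gb):
    return (ram_gb - swap_gb) / ram_gb * 100

print(ratio_upper_bound(16, 4))  # 75.0
print(ratio_upper_bound(8, 8))   # 0.0 -- swap equal to RAM leaves no room
```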

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Planning performance problem (67626.278ms)

2021-06-14 Thread Manuel Weitzman
> However, I'm skeptical that any problem actually remains in
> real-world use cases.

Hello Tom,

We also had some issues with planning and get_actual_variable_range(). We
actually found some interesting behaviour that probably needs a look from
someone with more expertise in how the planner works.
For the example being discussed you can add some joins into the equation and
planning times deteriorate quite a bit.
I'll just skip posting the first executions as it is already established that
a subsequent one will be faster.


create table a (a int primary key) with (autovacuum_enabled=off);
create table b (b int primary key, a int references a(a))
with (autovacuum_enabled=off);

insert into a select x from generate_series(1,1000) x;
insert into b select x, x from generate_series(1,1000) x;
create index b_a_idx on b(a);
analyze a, b;


In our case, a rollback of a bulk insert causes bloat on the index.


begin;
insert into a select x from generate_series(1001,2000) x;
rollback;

explain (analyze, buffers)
select * from a
join b on (b.a = a.a)
where b.a in (1,100,1,100,101);

 Planning:
   Buffers: shared hit=9 read=27329
 Planning Time: 134.560 ms
 Execution Time: 0.100 ms


I see a lot of buffers being read for some reason (wasn't this fixed?), and
times are slow too. But it gets worse with each join added to the select.


explain (analyze, buffers)
select * from a
join b b1 on (b1.a = a.a)
join b b2 on (b2.a = a.a)
where b1.a in (1,100,1,100,101);

 Planning:
   Buffers: shared hit=38 read=81992
 Planning Time: 312.826 ms
 Execution Time: 0.131 ms

Just add a few more joins and it is a recipe for disaster.
Apparently, the planner isn't reusing the data boundaries across alternative
plans. It would be nicer if the planner remembered each column's boundaries
for later reuse (within the same planner execution).

Another thing that worries me is that even though the second run has faster
planning, it is still much slower than the case without lots of bloat in the
index. And I don't think this is just an edge case: rollbacks of bulk inserts
can be quite common, and joins are expected in a SQL database.

We had downtime due to how the planner behaves in this case. Unfortunately,
more aggressive vacuum settings won't fix our problems. Most of the read
queries are issued to a replica, and when the planner issues start, CPU usage
on that node goes to 100%, which interferes with the replication process.
This means the replica can neither get to a new checkpoint with a new live
max value in the index, nor delete the bloat that vacuum has already cleaned
up on the leader server.

Oh, by the way, we're running version 13.2.


Regards,

Manuel





Master - Slave Replication Window Server

2021-06-14 Thread Haseeb Khan
Hello Everyone !

I trust that you are all doing well!

Does anyone have complete documentation on configuring a PostgreSQL v13
master and slave on Windows Server, and on how to test manual failover?

It would be highly appreciated if someone could help in this regard.

Br,
Haseeb Ahmad


Re: Master - Slave Replication Window Server

2021-06-14 Thread Rory Campbell-Lange
On 15/06/21, Haseeb Khan ([email protected]) wrote:
> Does anyone have complete documentation to configure PostgreSQL V13 Master
> and Slave on Windows Server and also how to test Manual Failover ?

I suggest having a look at 
https://www.postgresql.org/docs/13/high-availability.html

The server administration documentation at 
https://www.postgresql.org/docs/13/admin.html has some Windows-specific guides.

Rory




Re: overcommit_ratio setting

2021-06-14 Thread Yi Sun
> On Mon, 2021-06-14 at 18:16 +0800, Yi Sun wrote:
> > After checking the docs, I found that only vm.overcommit_memory=2 is
> > mentioned; there is no recommended value for vm.overcommit_ratio.
> > https://www.postgresql.org/docs/11/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT
> > Some articles mention configuring 80 or 90 in their environments.
> > So is it OK to just set vm.overcommit_ratio to 90?
>
> It depends on the size of RAM and swap space:
>
> overcommit_ratio < (RAM - swap) / RAM * 100
>
> Here, RAM is the RAM available to PostgreSQL.

Thank you for your reply

1. Our environments have 4 GB, 8 GB, 16 GB, ... of RAM. Following the
suggestion in the URL below, could we configure swap like this?
https://opensource.com/article/18/9/swap-space-linux-systems

RAM          swap
2GB – 8GB    = RAM
>8GB         8GB

2. If the RAM is 4 GB or 8 GB, the formula (RAM - swap) / RAM * 100 comes out
to 0, so how should we configure overcommit_ratio?

Thank you


Re: Master - Slave Replication Window Server

2021-06-14 Thread Haseeb Khan
Thank you, Rory!

Br,
Haseeb Ahmad

> On 15-Jun-2021, at 12:42 AM, Rory Campbell-Lange  
> wrote:
> 
> On 15/06/21, Haseeb Khan ([email protected]) wrote:
>> Does anyone have complete documentation to configure PostgreSQL V13 Master
>> and Slave on Windows Server and also how to test Manual Failover ?
> 
> I suggest having a look at 
> https://www.postgresql.org/docs/13/high-availability.html
> 
> The server administration documentation at 
> https://www.postgresql.org/docs/13/admin.html has some Windows-specific 
> guides.
> 
> Rory




Re: Master - Slave Replication Window Server

2021-06-14 Thread Haseeb Khan
Hello Rory,

Hope you're doing well!

I have a point of confusion: should we create an archive path on the standby
server and then set it in the recovery.conf file?


restore_command = 'cp /path/to/archive/%f %p'
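(As an aside, recovery.conf is gone since PostgreSQL 12: on v13 the recovery
settings go into postgresql.conf, and an empty standby.signal file in the
data directory marks the server as a standby. A minimal sketch, with the host
name and archive path as placeholders:)

```
# postgresql.conf on the standby (PostgreSQL 12+, no recovery.conf)
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# Windows uses copy rather than cp in restore_command:
restore_command = 'copy "C:\\archive\\%f" "%p"'
# plus an empty standby.signal file in the data directory
```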


BR,
Haseeb Ahmad



On Tue, Jun 15, 2021 at 10:42 AM Haseeb Khan  wrote:

> Thankyou Rory !
>
> Br,
> Haseeb Ahmad
>
> > On 15-Jun-2021, at 12:42 AM, Rory Campbell-Lange <
> [email protected]> wrote:
> >
> > On 15/06/21, Haseeb Khan ([email protected]) wrote:
> >> Does anyone have complete documentation to configure PostgreSQL V13
> Master
> >> and Slave on Windows Server and also how to test Manual Failover ?
> >
> > I suggest having a look at
> https://www.postgresql.org/docs/13/high-availability.html
> >
> > The server administration documentation at
> https://www.postgresql.org/docs/13/admin.html has some Windows-specific
> guides.
> >
> > Rory
>