Fwd: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

2018-12-08 Thread Square Bob
This question is probably a better fit for the performance list; sorry 
for the cross-post




 Forwarded Message 
Subject: 	Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

Date:   Sat, 8 Dec 2018 12:00:33 -0700
From:   Square Bob 
To: [email protected]



All;


My apologies if this is off topic.


Our company is moving to Aurora. In the past I would take care not to 
allow PostgreSQL to over-commit memory beyond the actual memory on the 
server, which meant I would add the buffer pool + (work_mem * 
max_connections) + (maintenance_work_mem * autovacuum workers)
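
Roughly, something like this (a back-of-the-envelope sketch in Python; it 
assumes the default of three autovacuum workers and ignores per-backend 
overhead, temp_buffers, and the fact that a single query can use work_mem 
more than once):

    # Rough worst-case ceiling: everything that could be allocated
    # at once if every limit were hit simultaneously.
    def worst_case_gb(shared_buffers_gb, work_mem_mb, max_connections,
                      maintenance_work_mem_gb, autovacuum_workers=3):
        return (shared_buffers_gb
                + (work_mem_mb * max_connections) / 1024.0
                + maintenance_work_mem_gb * autovacuum_workers)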



However, as I look at the Aurora defaults, they are all off the charts. 
For example, based on the calculations in the config (Amazon doesn't 
make it easy: some settings are in pages, some are in kB, some are in 
who knows what), I see the following settings as default in our Aurora config:



The instance size is db.r4.xlarge


This instance size is listed as having 30.5GB of RAM.


Here are the default settings:


shared_buffers: {DBInstanceClassMemory/10922}

which equates to 24GB
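
(For reference: DBInstanceClassMemory is in bytes and shared_buffers is in 
8kB pages, so 32,749,125,632 / 10922 is ~2,998,455 pages, and 2,998,455 * 
8192 bytes is ~24.5GB. Since 10922 is roughly 8192 * 4/3, this formula 
works out to about 75% of instance memory -- assuming DBInstanceClassMemory 
really is the full 30.5GB.)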


work_mem:   64000 (kB)

which equates to 65.5MB


maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)

which equates to 4.2GB


max_connections: LEAST({DBInstanceClassMemory/9531392},5000)

which equates to 3,380
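
(That it comes out at 3,380 rather than the 3,435 you'd get from 
32,749,125,632 / 9,531,392 suggests DBInstanceClassMemory is actually 
somewhat less than the nominal 30.5GB -- presumably some memory is 
reserved for the OS and RDS processes.)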


According to my math (if I got it right), in a worst-case scenario, if 
we maxed out the max_connections, work_mem and maintenance_work_mem 
limits, the db would request 247GB of memory.
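
Showing my work (a sketch in Python using the figures above; the total 
shifts by a few GB depending on rounding and on how many autovacuum 
workers you assume, hence roughly 250GB rather than exactly 247GB):

    # Worst case with the Aurora defaults listed above
    shared_buffers_gb = 24.0
    work_mem_gb       = 3380 * 65.5 / 1024.0  # max_connections * work_mem, ~216GB
    maintenance_gb    = 3 * 4.2               # 3 autovacuum workers assumed, ~12.6GB
    print(shared_buffers_gb + work_mem_gb + maintenance_gb)  # ~253GB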


Additionally, Amazon has set effective_cache_size =
{DBInstanceClassMemory/10922}

which is the same formula as shared_buffers and so also equates to 
roughly 24GB (which, given the other outlandish settings, may be the 
only appropriate setting in the system)




What the hell is Amazon doing here? Am I missing the boat on tuning 
PostgreSQL memory? Is Amazon simply betting that users will never 
fully utilize an instance?



Thanks in advance





Re: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

2018-12-08 Thread Bob Lunney
Aurora doesn’t use a typical file system, so the RAM usually reserved for the OS 
file system buffer cache is instead used for shared_buffers. 

We run multiple Aurora/PG instances and they work quite well.  There are 
limitations on superuser privileges, so be aware of that, but generally speaking 
Aurora/PG works well. 

Bob Lunney

Sent from my PDP11

> On Dec 8, 2018, at 2:03 PM, Square Bob  wrote:
> 
> [snip]


Re: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

2018-12-08 Thread Andres Freund
On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:
> On RDS PostgreSQL, the default is 25% of your server memory. This seems
> to be pretty widely accepted as a good starting point on PostgreSQL.

FWIW, I think it's widely cited, but also bad advice.  25% for an OLTP
workload on a 1TB machine with a database size above 25% is a terrible
idea.



Re: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

2018-12-08 Thread Rob Sargent



> On Dec 8, 2018, at 3:12 PM, Andres Freund  wrote:
> 
> On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:
>> On RDS PostgreSQL, the default is 25% of your server memory. This seems
>> to be pretty widely accepted as a good starting point on PostgreSQL.
> 
> FWIW, I think it's widely cited, but also bad advice.  25% for an OLTP
> workload on a 1TB machine with a database size above 25% is a terrible
> idea.
> 

Sorry, could you please expand “database size above 25%”?  25% of what?

rjs




Re: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)

2018-12-08 Thread Andres Freund
On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:
> Sorry, could you please expand “database size above 25%”?  25% of what?

Memory available to postgres (i.e. 100% of the server's memory on a
server dedicated to postgres, less if it's shared duty).

Greetings,

Andres Freund