work_mem and shared_buffers question

2020-02-10 Thread Ayub M
   1. shared_buffers - In a regular PostgreSQL installation, say I allocate
   25% of my memory to shared_buffers; that leaves 75% for everything else,
   such as the OS, the page cache, work_mem allocations, etc. Is my
   understanding correct? If so, and AWS Aurora for PostgreSQL uses 75% of
   memory for shared_buffers, would that leave just 25% for other things?
   2. Does the memory specified by work_mem get fully allocated to every
   session, irrespective of whether it performs any sorting or hashing
   operations?


-- 
Regards,
Ayub


Re: work_mem and shared_buffers question

2020-02-10 Thread Naresh g
1. Yes, by default it is advisable to set shared_buffers to 25% of RAM,
because community PostgreSQL depends heavily on the OS cache: when you fetch
something from disk, it is first copied into the OS page cache (the 75%) and
then into shared_buffers (the 25%), so two copies of the same data can exist
in system RAM at once. This double copying is what buffered I/O refers to.

Amazon Aurora eliminates this buffered I/O, hence you are not required to
stick to the 25%-of-RAM guideline there.
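As a concrete illustration of the 25% guideline, a self-managed host with
16 GB of RAM might be configured along these lines in postgresql.conf (the
absolute numbers are only an example, not from this thread):

```
# postgresql.conf sketch for a 16 GB self-managed host (example values only)
shared_buffers = 4GB          # ~25% of RAM; the rest stays available to the OS page cache
effective_cache_size = 12GB   # planner hint: roughly shared_buffers plus expected OS cache
```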

2. work_mem is just a per-operation limit: if your session sorts (or hashes)
something, it uses up to that much memory; otherwise the memory simply
remains available to the OS.
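The per-sort nature of work_mem can be observed with EXPLAIN ANALYZE; a
sketch (the table and column names here are made up for illustration):

```sql
-- With a small work_mem the sort spills to disk ("external merge");
-- with a larger one the same sort stays in memory ("quicksort").
SET work_mem = '64kB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_table ORDER BY some_col;
--   Sort Method: external merge  Disk: ...kB

SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_table ORDER BY some_col;
--   Sort Method: quicksort  Memory: ...kB
```

SET only changes the value for the current session, so this is a safe way to
experiment without touching postgresql.conf.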



Re: work_mem and shared_buffers question

2020-02-10 Thread Ayub M
Thanks. For q2: if work_mem is a limit before spilling onto disk, is there a
minimum amount of memory that gets allocated to each session when it starts?


-- 
Regards,
Ayub


Re: work_mem and shared_buffers question

2020-02-10 Thread Naresh g
No, as I said, in my observation your connection will not use work_mem
unless a sort actually happens.

That said, every connection does use a certain amount of RAM for its own
overhead; some documents put it at about 5 MB, others at about 1.9 MB.

Expert review of this observation may be needed.
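One way to check the per-backend overhead on your own system is to look at
the resident set size of a backend process. A sketch (the shell's own PID
stands in here so it runs anywhere; on a real server, substitute the PID
returned by SELECT pg_backend_pid()):

```shell
# Print the resident set size (in kB) of a process by PID.
# Replace $$ with the PID reported by pg_backend_pid() for a real backend.
BACKEND_PID=$$
ps -o rss= -p "$BACKEND_PID"
```

Note that RSS includes shared_buffers pages the backend has touched, so it
overstates the private per-connection cost.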



Pre-version pg_upgrade syntax check

2020-02-10 Thread Rich Shepard

Running Slackware-14.2/x86_64. Current installation is postgres-11.5 and I'm
upgrading to postgresql-12.1. Both versions are installed and stopped.

If I have correctly read the pg_upgrade manual page, this is the command to
use (as user postgres); the lines are split here to fit the message, but the
command will be on a single line:

pg_upgrade \
-b /usr/lib64/postgresql/11/bin/ \
-B /usr/lib64/postgesql/12/bin/ \
-d /var/lib/pgsql/11/data/ \
-D /var/lib/pgsql/12/data/ \
-p 5432 \
-P 5432

I don't know that the port numbers are required.

Have I missed an option?

Rich




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Rich Shepard

On Mon, 10 Feb 2020, Rich Shepard wrote:


> -B /usr/lib64/postgesql/12/bin/ \

Typo: it should be /usr/lib64/postgresql/12/bin/

Rich




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Adrian Klaver

On 2/10/20 1:59 PM, Rich Shepard wrote:
> Running Slackware-14.2/x86_64. Current installation is postgres-11.5 and I'm
> upgrading to postgresql-12.1. Both versions are installed and stopped.
>
> If I have correctly read the pg_upgrade manual page this is the command to
> use (as user postgres) with lines split to fit the message; the command will
> be on a single line:
>
> pg_upgrade \
> -b /usr/lib64/postgresql/11/bin/ \
> -B /usr/lib64/postgesql/12/bin/ \
> -d /var/lib/pgsql/11/data/ \
> -D /var/lib/pgsql/12/data/ \
> -p 5432 \
> -P 5432
>
> I don't know that the port numbers are required.


They are not:

https://www.postgresql.org/docs/12/pgupgrade.html

"Obviously, no one should be accessing the clusters during the upgrade. 
pg_upgrade defaults to running servers on port 50432 to avoid unintended 
client connections. You can use the same port number for both clusters 
when doing an upgrade because the old and new clusters will not be 
running at the same time. However, when checking an old running server, 
the old and new port numbers must be different."





> Have I missed an option?


Not a required one. Best way to find out is to run --check.



Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Rich Shepard

On Mon, 10 Feb 2020, Adrian Klaver wrote:


>> I don't know that the port numbers are required.
>
> They are not: https://www.postgresql.org/docs/12/pgupgrade.html
> "Obviously, no one should be accessing the clusters during the upgrade.
> pg_upgrade defaults to running servers on port 50432 to avoid unintended
> client connections. You can use the same port number for both clusters
> when doing an upgrade because the old and new clusters will not be running
> at the same time. However, when checking an old running server, the old
> and new port numbers must be different."


Adrian,

As I'm the only user of the databases, and both are shut down, I assumed
specifying port numbers was not needed.


>> Have I missed an option?
>
> Not a required one. Best way to find out is to run --check.


Good advice; I'll do that.

Thanks,

Rich




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Rich Shepard

On Mon, 10 Feb 2020, Rich Shepard wrote:


> Good advice; I'll do that.


Huh! Not what I expected:

postgres@salmo:~$ pg_upgrade -c -b /usr/lib64/postgresql/11/bin/ -B
/usr/lib64/postgresql/12/bin/ -d /var/lib/pgsql/11/data -D
/var/lib/pgsql/12/data/

(All on one line; wrapped by alpine.)

Performing Consistency Checks
-
Checking cluster versions 
Old cluster data and binary directories are from different major versions.

Failure, exiting

Well, of course they're from different major versions, that's why I'm
running pg_upgrade. What did I do incorrectly?

Rich




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Adrian Klaver

On 2/10/20 3:40 PM, Rich Shepard wrote:

> On Mon, 10 Feb 2020, Rich Shepard wrote:
>
>> Good advice; I'll do that.
>
> Huh! Not what I expected:
>
> postgres@salmo:~$ pg_upgrade -c -b /usr/lib64/postgresql/11/bin/ -B
> /usr/lib64/postgresql/12/bin/ -d /var/lib/pgsql/11/data -D
> /var/lib/pgsql/12/data/
>
> (All on one line; wrapped by alpine.)
>
> Performing Consistency Checks
> -
> Checking cluster versions
> Old cluster data and binary directories are from different major versions.
>
> Failure, exiting
>
> Well, of course they're from different major versions, that's why I'm
> running pg_upgrade. What did I do incorrectly?


What it is saying is -b and -d are pointing at binary and data 
directories that are incompatible. You need to make sure that:


/usr/lib64/postgresql/11/bin/
and
/var/lib/pgsql/11/data

are actually pointing at 11 instances of binaries and data respectively.



Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Rich Shepard

On Mon, 10 Feb 2020, Adrian Klaver wrote:


> What it is saying is -b and -d are pointing at binary and data directories
> that are incompatible. You need to make sure that:
>
> /usr/lib64/postgresql/11/bin/
> and
> /var/lib/pgsql/11/data
>
> are actually pointing at 11 instances of binaries and data respectively.


Adrian,

I just copied the contents of those directories from last Thursday's
backups. Still fails.

I have the pg_dumpall .sql file from last Thursday. I'll fire up
postgresql-12.1 and use psql to read in that file, unless you have a
recommendation for me to apply.

Thanks,

Rich




Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-10 Thread Jerry Sievers
Marc  writes:

> Adrian, Christoph, Tom,
>
> We identified as the problem being persistent on all tables with many
> records ( +600K ) and they all had a JSONB column ( we feel that
> might be related )

Did you remember to re-analyze all tables after importing the data?

Autovac probably will have done it for you for objects non-trivial in
size, but it's worth asking.

Such an omission could certainly result in poor exec plans, large memory
use and in turn automated intervention.

FWIW
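If statistics were never regathered after the restore, a manual analyze is a
cheap thing to try; a sketch (database-wide, or just the table that triggers
the problem):

```sql
-- Database-wide statistics refresh after a dump/restore upgrade:
ANALYZE VERBOSE;

-- Or only the problem table from this thread:
ANALYZE VERBOSE f_gsxws_schedule;
```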



>
> Luckily we were able to downgrade to version 11.6 on the same
> system (macOS 10.14.6) so that OS impact can be ruled out.
>
> We will keep the 12.1 in place so that we can run additional tests to
> assist to pin-point the issue.
>
> Feel free to ask but allow us to recover from these hectic days ;-)
>
> Many thanks for the help !
>
>
>
> Marc
>
>
> On 8 Feb 2020, at 21:09, Nick Renders wrote:
>
> Hi,
>
> We have just upgraded our Postgres 9.6 database to 12.1
> (pg_dumpall -> pg_restore on a clean installation) and now we are
> having some issues with one of our tables.
>
> When we do the following statement:
>
> SELECT * FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier) = 'TEST'
>
> the Postgres service restarts.
>
> It seems that using UPPER() in the WHERE clause is causing this.
> The same statement without UPPER() works just fine.
>
> I have tried to emulate the issue with other tables, but
> f_gsxws_schedule seems to be the only one.
> The table also has another character field that is indexed, and
> the same problem occurs there. Whenever we use UPPER() or LOWER()
> to do a case-insensitive search, the service reboots.
>
> Looking at the table's definition, I don't see anything different
> with the other tables.
>
> Here is what is logged:
>
> 2020-02-08 20:21:19.942 CET [83892] LOG: server process (PID
> 85456) was terminated by signal 9: Killed: 9
> 2020-02-08 20:21:19.942 CET [83892] DETAIL: Failed process was
> running: SELECT * FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier)
> = 'TEST'
> 2020-02-08 20:21:19.942 CET [83892] LOG: terminating any other
> active server processes
> 2020-02-08 20:21:19.943 CET [85364] WARNING: terminating
> connection because of crash of another server process
> 2020-02-08 20:21:19.943 CET [85364] DETAIL: The postmaster has
> commanded this server process to roll back the current
> transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> 2020-02-08 20:21:19.943 CET [85364] HINT: In a moment you should
> be able to reconnect to the database and repeat your command.
> 2020-02-08 20:21:19.943 CET [85360] WARNING: terminating
> connection because of crash of another server process
> 2020-02-08 20:21:19.943 CET [85360] DETAIL: The postmaster has
> commanded this server process to roll back the current
> transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> 2020-02-08 20:21:19.943 CET [85360] HINT: In a moment you should
> be able to reconnect to the database and repeat your command.
> 2020-02-08 20:21:19.943 CET [85269] WARNING: terminating
> connection because of crash of another server process
> 2020-02-08 20:21:19.943 CET [85269] DETAIL: The postmaster has
> commanded this server process to roll back the current
> transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
> 2020-02-08 20:21:19.943 CET [85269] HINT: In a moment you should
> be able to reconnect to the database and repeat your command.
> 2020-02-08 20:21:19.946 CET [83892] LOG: all server processes
> terminated; reinitializing
> 2020-02-08 20:21:19.988 CET [85686] LOG: database system was
> interrupted; last known up at 2020-02-08 20:20:48 CET
> 2020-02-08 20:21:20.658 CET [85686] LOG: database system was not
> properly shut down; automatic recovery in progress
> 2020-02-08 20:21:20.662 CET [85686] LOG: redo starts at C/
> B99B45A0
> 2020-02-08 20:21:20.662 CET [85686] LOG: invalid record length at
> C/B99B4688: wanted 24, got 0
> 2020-02-08 20:21:20.662 CET [85686] LOG: redo done at C/B99B4650
> 2020-02-08 20:21:20.675 CET [83892] LOG: database system is ready
> to accept connections
>
>
> Has anyone noticed anything like this before? Any idea how to fix
> this?
>
>
> Best regards,
>
> Nick Renders
>
>
>
> ARC - your Apple Authorised Service partner
> H.D. Saviolaan 8
> B-1700 Dilbeek
> Belgium
> i...@arcict.com

Re: Pre-version pg_upgrade syntax check

2020-02-10 Thread Adrian Klaver

On 2/10/20 4:07 PM, Rich Shepard wrote:

> On Mon, 10 Feb 2020, Adrian Klaver wrote:
>
>> What it is saying is -b and -d are pointing at binary and data directories
>> that are incompatible. You need to make sure that:
>>
>> /usr/lib64/postgresql/11/bin/
>> and
>> /var/lib/pgsql/11/data
>>
>> are actually pointing at 11 instances of binaries and data respectively.
>
> Adrian,
>
> I just copied the contents of those directories from last Thursday's
> backups. Still fails.


So you already have 11 and 12 instances of Postgres running?

If so why use pg_upgrade?

To verify what is going on do:

/usr/lib64/postgresql/11/bin/psql --version

vi /var/lib/pgsql/11/data/PG_VERSION
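Those two checks can be combined into a small comparison; a sketch that
simulates the command output so it runs anywhere (on the real system,
substitute the psql --version and PG_VERSION commands shown in the
comments):

```shell
# Simulated outputs; on a real system replace with:
#   bin_ver=$(/usr/lib64/postgresql/11/bin/psql --version)
#   data_ver=$(cat /var/lib/pgsql/11/data/PG_VERSION)
bin_ver="psql (PostgreSQL) 11.5"
data_ver="11"

# PG_VERSION holds just the major version; extract it from the psql banner too.
bin_major=$(echo "$bin_ver" | awk '{print $3}' | cut -d. -f1)
if [ "$bin_major" = "$data_ver" ]; then
  echo "OK: binaries and data are both major version $data_ver"
else
  echo "MISMATCH: binaries are $bin_major, data directory is $data_ver"
fi
```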



> I have the pg_dumpall .sql file from last Thursday. I'll fire up
> postgresql-12.1 and use psql to read in that file, unless you have a
> recommendation for me to apply.
>
> Thanks,
>
> Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-10 Thread Thomas Munro
On Mon, Feb 10, 2020 at 4:35 AM Marc  wrote:
> We will keep the 12.1 in place so that we can run additional tests to assist 
> to pin-point the issue.
>
> Feel free to ask but allow us to recover from these hectic days ;-)

Here's how to get a stack so we can see what it was doing, assuming
you have the Apple developer tools installed:

1.  Find the PID of the backend you're connected to with SELECT
pg_backend_pid().
2.  "lldb -p PID" from a shell to attach to the process, then "cont"
to let it continue running.
3.  Run the query in that backend and wait for the SIGKILL.
4.  In the lldb session, type "bt".

It'll only make sense if your PostgreSQL build has debug symbols, but let's see.
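The four steps above, as a terminal sketch (the PID 12345 is a placeholder
for whatever SELECT pg_backend_pid() returns in your session):

```
$ lldb -p 12345        # step 2: attach to the backend process
(lldb) cont            # let it continue running
  ... step 3: run the query in the psql session, wait for the SIGKILL ...
(lldb) bt              # step 4: print the backtrace
```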