Questions about the Debian Package version of pgAdmin

2025-01-05 Thread Nick


I sent this to the PgAdmin list but didn't get any replies. Not sure
how many people are on that list, but since these questions are general
enough, I figure someone on the main list will know:

I'm using the apt repository version for Debian 12 (Bookworm). I'm also
using Ansible to manage the server, and I have some questions about the
pgAdmin Debian package version.

1. If we're using the pgAdmin DEB repo, will `apt upgrade` always update
pgAdmin to the latest major version? I.e. will we be doing an upgrade
every four weeks?

2. Should `sudo /usr/pgadmin4/bin/setup-web.sh` be run every time `apt
upgrade` runs, or only once?

3. Since the package is also installing Apache for us and configuring
it, is there a safe way for us to modify the Apache config without
breaking updates?

We'd like to enable TLS for Apache, but in a way that doesn't break
anything each time the setup playbook runs and updates PgAdmin.

Thanks,
Nick









Re: Questions about the Debian Package version of pgAdmin

2025-01-05 Thread Nick
On Sun, 2025-01-05 at 22:18 -0500, Ron Johnson wrote:
> On Sun, Jan 5, 2025 at 7:27 PM Nick  wrote:
> > 
> > I have some questions about the
> > pgAdmin Debian package version.
> > 
> > 1. If we're using the pgAdmin DEB repo, will `apt-update` always
> > update
> > pgAdmin to the latest major version? ie. Will we be doing an
> > upgrade
> > every four weeks?
> > 
> 
> Debian's Stable repos never (well, very very rarely) upgrade the
> version.  It's always bug fixes, and that's why Stable software can get
> so old.
>  
> 

This was in reference to the pgAdmin repository, not the official
Debian package repository. I didn't know whether the pgAdmin repo
maintainers followed the same rule of one major version of pgAdmin per
version of Debian, or whether the pgAdmin repo always carries the latest
version, since pgAdmin only supports rolling releases.








Re: Initial Postgres admin account setup using Ansible?

2024-12-31 Thread Nick
> 
> On Tue, Dec 31, 2024 at 10:32 PM Nick  wrote:
> > 
> > I'm trying to create an Ansible playbook that sets up and manages
> > Postgres on Debian 12.
> > 
> > I'm having issues with the default username/login structure, and
> > could
> > use some help.
> > 
> > I'm installing the `postgresql` package via apt, and Debian creates
> > a
> > `postgres` system account that has a locked password.
> > 
> > I can login to Postgres manually by first becoming root then
> > running
> > `sudo -u postgres psql` as root. But when the Ansible user (which
> > has
> > passwordless sudo) tries to run `sudo -u postgres psql`, I get:
> > 
> > "Sorry, user Ansible is not allowed to execute '/usr/bin/psql' as
> > postgres on example.com."
> > 
> > This is likely because the postgres POSIX account has a locked
> > password, so only root can become postgres. Other users with sudo
> > permissions can't become a locked account.
> > 
> > So I **could** unlock the `postgres` POSIX account, but I
> > understand
> > that this account is locked for a reason.
> > 
> > The goal is to have Ansible manage the creation of databases and
> > roles
> > in the Postgres database.
> > 
> > So I need to create an account in Postgres that Ansible can use as
> > the
> > super user. I would like to do this in a way that doesn't require
> > me to
> > manually login to the server, become root, become postgres as root,
> > then manually create an Ansible role.
> > 
> > What is the proper (secure) way to let the Ansible POSIX user
> > manage
> > postgres? It seems there should be a fully automated way to
> > bootstrap
> > an Ansible user for `postgres`.
> > 
> 

I think I found a working solution:

In `pg_hba.conf`, change:

```
local   all   postgres   peer
```

to:

```
local   all   all   peer map=ansible_map
```


In `pg_ident.conf`, add:

```
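# MAPNAME       SYSTEM-USERNAME   PG-USERNAME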
ansible_map     ansible     postgres
ansible_map     postgres    postgres

```

Then in the playbook, don't become (stay as `ansible`):

```
- name: Ping PostgreSQL
  postgresql_ping:
    db: postgres
    login_unix_socket: "/var/run/postgresql"
    login_user: postgres
  become: false
```

This seems to work, but is it secure? If USER is `all` in
`pg_hba.conf`, can any POSIX account login?










Initial Postgres admin account setup using Ansible?

2024-12-31 Thread Nick


I'm trying to create an Ansible playbook that sets up and manages
Postgres on Debian 12.

I'm having issues with the default username/login structure, and could
use some help.

I'm installing the `postgresql` package via apt, and Debian creates a
`postgres` system account that has a locked password.

I can login to Postgres manually by first becoming root then running
`sudo -u postgres psql` as root. But when the Ansible user (which has
passwordless sudo) tries to run `sudo -u postgres psql`, I get:

"Sorry, user Ansible is not allowed to execute '/usr/bin/psql' as
postgres on example.com."

This is likely because the postgres POSIX account has a locked
password, so only root can become postgres. Other users with sudo
permissions can't become a locked account.

So I **could** unlock the `postgres` POSIX account, but I understand
that this account is locked for a reason.

The goal is to have Ansible manage the creation of databases and roles
in the Postgres database.

So I need to create an account in Postgres that Ansible can use as the
super user. I would like to do this in a way that doesn't require me to
manually login to the server, become root, become postgres as root,
then manually create an Ansible role.

What is the proper (secure) way to let the Ansible POSIX user manage
postgres? It seems there should be a fully automated way to bootstrap
an Ansible user for `postgres`.










Re: Initial Postgres admin account setup using Ansible?

2024-12-31 Thread Nick
On Tue, 2024-12-31 at 23:16 +0100, Andreas 'ads' Scherbaum wrote:
> 
> 
> 
> Can you please provide an example of the task(s) which fail?
> If you have passwordless "sudo" configured tor the user running
> Ansible,
> this works:
> 
> - name: Ping PostgreSQL
>   postgresql_ping:
>     db: postgres
>     login_unix_socket: "/var/run/postgresql"
>     login_user: postgres
>   become: yes
>   become_user: postgres
>  
> More examples and details:
> https://andreas.scherbaum.la/writings/Managing_PostgreSQL_with_Ansible_-_Percona_Live_2022.pdf
> 
> 

When trying this:

- name: Ping PostgreSQL
  postgresql_ping:
    db: postgres
    login_unix_socket: "/var/run/postgresql"
    login_user: postgres
  become: yes
  become_user: postgres


I get:

Ping PostgreSQL...
  xxx.xxx.xxx.xxx failed | msg: Failed to set permissions on the
temporary files Ansible needs to create when becoming an unprivileged
user (rc: 1, err: chmod: invalid mode: ‘A+user:postgres:rx:allow’
Try 'chmod --help' for more information.
}). For information on working around this, see
https://docs.ansible.com/ansible-core/2.17/playbook_guide/playbooks_privilege_escalation.html#risks-of-becoming-an-unprivileged-user







Re: how to securely delete the storage freed when a table is dropped?

2018-04-13 Thread Nick Cleaton
On 13 April 2018 at 18:48, Jonathan Morgan
 wrote:
> For a system with information stored in a PostgreSQL 9.5 database, in which
> data stored in a table that is deleted must be securely deleted (like shred
> does to files), and where the system is persistent even though any
> particular table likely won't be (so can't just shred the disks at
> "completion"), I'm trying to figure out my options for securely deleting the
> underlying data files when a table is dropped.
>
> As background, I'm not a DBA, but I am an experienced implementor in many
> languages, contexts, and databases. I've looked online and haven't been able
> to find a way to ask PostgreSQL to do the equivalent of shredding its
> underlying files before releasing them to the OS when a table is DROPped. Is
> there a built-in way to ask PostgreSQL to do this? (I might just not have
> searched for the right thing - my apologies if I missed something)
>
> A partial answer we're looking at is shredding the underlying data files for
> a given relation and its indexes manually before dropping the tables, but
> this isn't so elegant, and I'm not sure it is getting all the information
> from the tables that we need to delete.
>
> We also are looking at strategies for shredding free space on our data disk
> - either running a utility to do that, or periodically replicating the data
> volume, swapping in the results of the copy, then shredding the entire
> volume that was the source so its "free" space is securely overwritten in
> the process.
>
> Are we missing something? Are there other options we haven't found? If we
> have to clean up manually, are there other places we need to go to shred
> data than the relation files for a given table, and all its related indexes,
> in the database's folder? Any help or advice will be greatly appreciated.

Can you encrypt the data in the application, above the DB level ? That
would be cleaner if you can.

If not, you'll have to worry about both the DB's data files themselves
and the WAL files in pg_xlog/ which hold copies of the recently
written data. Even if you securely scrub the deleted parts of the
filesystems after dropping the table, there could still be copies of
secret table data in WAL files that haven't yet been overwritten.

One way to scrub deleted files would be to use ZFS and have an extra
disk. When it's time to scrub, "zpool attach" the extra disk to your
zpool, which will cause ZFS to copy over only the files that haven't
been deleted, in the background. When that's finished you can detach
the original disk from the zpool and then do a low-level overwrite of
that entire disk. For extra security points use encrypted block
devices underneath ZFS, and instead of scrubbing the disk just destroy
the encryption key that you were using for it.



Allow Reg Expressions in Position function

2018-08-19 Thread Nick Dro
The position function allows only a string as the pattern parameter:
 
select position(sub-string in string)
 
If you are looking for more than one sub-string, you can't do it with a single call to position.
More info:
https://stackoverflow.com/questions/51925037/how-to-find-position-of-multiple-sub-string-in-postgresql
 
 
Required solution: make the sub-string parameter accept a regular expression, just like the LIKE operator.



RE: Re: Allow Reg Expressions in Position function

2018-08-20 Thread Nick Dro

This is incorrect:
SELECT position(substring('https://www.webexample.com/s/help?' FROM '/(s|b|t)/') IN 'https://www.webexample.com/s/help?');
 
It gives 5, which is wrong.
 
The answer to my question is available in the Stackoverflow link, but note that it's very complex.
It would be very easy if regular expressions were supported.
On Aug 20, 2018, 10:31, Ken Tanzer wrote:
If I correctly understood what you wanted based on the SO description ("Something like: select position ('/s/' or '/b/' or '/t/' in URL)"), you could do that by combining SUBSTRING with position. Something like:

SELECT position(substring(URL FROM '/(s|b|t)/') IN URL);

Cheers,
Ken

On Sun, Aug 19, 2018 at 11:27 PM Nick Dro <postgre...@walla.co.il> wrote:
Position function allows only string as pattern parameter:

 
select position(sub-string in string)
 
If you are looking for more than one sub-string you can't do it with a single call to position.
More info: 
https://stackoverflow.com/questions/51925037/how-to-find-position-of-multiple-sub-string-in-postgresql
 
 
Required solution: make sub-string accept regular expression just like the LIKE operator.
--
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801
Subscribe to the mailing list to learn more about AGENCY or follow the discussion.





RE: Re: Re: Allow Reg Expressions in Position function

2018-08-20 Thread Nick Dro
Hi,
My specific issue is already solved.
For the greater good, I sent the email requesting to allow regular expressions in the position function.
Not sure if you will implement it... Just wanted to let you know that the limited capabilities of this function create overhead.

On Aug 20, 2018, 14:35, Geoff Winkless wrote:
On Mon, 20 Aug 2018 at 09:22, Nick Dro <postgre...@walla.co.il> wrote:

This incorrect.
SELECT position(substring('https://www.webexample.com/s/help?' FROM '/(s|b|t)/') IN 'https://www.webexample.com/s/help?');
 
Gives 5. It's wrong.

For some reason, substring() returns the parenthesised subexpression rather than the top level. The comment in testregexsubstr does say that it does this, but it's not clear from the documentation at all, unless I'm missing where it says it.

You can work around this by putting parentheses around the whole expression, because that way the first subexpression is the whole match.

db=# SELECT position(substring('https://www.webexample.com/s/help?' FROM '(/(s|b|t)/)') IN 'https://www.webexample.com/s/help?');
 position
----------
       27

Geoff





Unable to startup postgres: Could not read from file "pg_clog/00EC"

2020-02-05 Thread Nick Renders

Hello,

Yesterday, we experienced some issues with our Postgres installation 
(v9.6 running on macOS 10.12).
It seems that the machine was automatically rebooted for an as yet unknown
reason, and afterwards we were unable to start the Postgres service.


The postgres log shows the following:

2020-02-04 15:20:41 CET LOG:  database system was interrupted; last 
known up at 2020-02-04 15:18:34 CET
2020-02-04 15:20:43 CET LOG:  database system was not properly shut 
down; automatic recovery in progress
2020-02-04 15:20:44 CET LOG:  invalid record length at 14A/9E426DF8: 
wanted 24, got 0

2020-02-04 15:20:44 CET LOG:  redo is not required
2020-02-04 15:20:44 CET FATAL:  could not access status of transaction 
247890764
2020-02-04 15:20:44 CET DETAIL:  Could not read from file "pg_clog/00EC" 
at offset 106496: Undefined error: 0.
2020-02-04 15:20:44 CET LOG:  startup process (PID 403) exited with exit 
code 1
2020-02-04 15:20:44 CET LOG:  aborting startup due to startup process 
failure

2020-02-04 15:20:44 CET LOG:  database system is shut down


After some searching, I found someone who had had a similar issue and 
was able to resolve it by overwriting the file in pg_clog.

So I tried the following command:

dd if=/dev/zero of=[dbpath]/pg_clog/00EC bs=256k count=1

and now the service is running again.


But I am worried that there might still be some issues that we haven't 
noticed yet. I also have no idea what caused this error in the first 
place. It might have been the reboot, but maybe the reboot was a result 
of a Postgres issue.


Is there anything specific I should check in our postgres installation /
database to make sure it is running ok now? Any way to see what the
consequences were of purging that one pg_clog file?


Best regards,

Nick Renders




Re: Unable to startup postgres: Could not read from file "pg_clog/00EC"

2020-02-06 Thread Nick Renders

Thank you for the feedback, Alvaro.

Unfortunately, the database is no longer "dumpable". We were able to do 
a pg_dump yesterday morning (12 hours after the crash + purging the 
pg_clog) but if we try one now, we get the following error:


	unexpected chunk number 1 (expected 0) for toast value 8282331 in 
pg_toast_38651


Looking at our data, there seem to be 6 tables that have corrupt 
records. Doing a SELECT * for one of those records, will return a 
similar error:


missing chunk number 0 for toast value 8288522 in pg_toast_5572299


What is the best way to go from here? Is tracking down these corrupt 
records and deleting them the best / only solution?
Is there a way to determine if there are issues with new data (after the
crash)?


Any help and advice is very much appreciated.

Thanks,


Nick Renders


On 5 Feb 2020, at 12:51, Alvaro Herrera wrote:


On 2020-Feb-05, Nick Renders wrote:

> Is there anything specific I should check in our postgres installation /
> database to make sure it is running ok now? Anyway to see what the
> consequences were of purging that one pg_clog file?


Losing pg_clog files is pretty bad, and should not happen; then again,
this might have been something else (ie. the file was maybe not lost).
That said, wrongly overwriting files is even worse.

By zeroing an existing pg_clog file, you marked a bunch of 
transactions

as aborted.  Your data is now probably inconsistent, if not downright
corrupt.  I would be looking for my most recent backup ...

If you're very lucky, your database might be pg_dumpable.  I would try
that, followed by restoring it in a separate clean instance and seeing
what happens.

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-08 Thread Nick Renders

Hi,

We have just upgraded our Postgres 9.6 database to 12.1 (pg_dumpall -> 
pg_restore on a clean installation) and now we are having some issues 
with one of our tables.


When we do the following statement:

SELECT * FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier) = 'TEST'

the Postgres service restarts.

It seems that using UPPER() in the WHERE clause is causing this. The 
same statement without UPPER() works just fine.


I have tried to emulate the issue with other tables, but 
f_gsxws_schedule seems to be the only one.
The table also has another character field that is indexed, and the same 
problem occurs there. Whenever we use UPPER() or LOWER() to do a 
case-insensitive search, the service reboots.


Looking at the table's definition, I don't see anything different with 
the other tables.


Here is what is logged:

2020-02-08 20:21:19.942 CET [83892] LOG:  server process (PID 85456) was 
terminated by signal 9: Killed: 9
2020-02-08 20:21:19.942 CET [83892] DETAIL:  Failed process was running: 
SELECT * FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier) = 'TEST'
2020-02-08 20:21:19.942 CET [83892] LOG:  terminating any other active 
server processes
2020-02-08 20:21:19.943 CET [85364] WARNING:  terminating connection 
because of crash of another server process
2020-02-08 20:21:19.943 CET [85364] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-08 20:21:19.943 CET [85364] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-08 20:21:19.943 CET [85360] WARNING:  terminating connection 
because of crash of another server process
2020-02-08 20:21:19.943 CET [85360] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-08 20:21:19.943 CET [85360] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-08 20:21:19.943 CET [85269] WARNING:  terminating connection 
because of crash of another server process
2020-02-08 20:21:19.943 CET [85269] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-08 20:21:19.943 CET [85269] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-08 20:21:19.946 CET [83892] LOG:  all server processes 
terminated; reinitializing
2020-02-08 20:21:19.988 CET [85686] LOG:  database system was 
interrupted; last known up at 2020-02-08 20:20:48 CET
2020-02-08 20:21:20.658 CET [85686] LOG:  database system was not 
properly shut down; automatic recovery in progress

2020-02-08 20:21:20.662 CET [85686] LOG:  redo starts at C/B99B45A0
2020-02-08 20:21:20.662 CET [85686] LOG:  invalid record length at 
C/B99B4688: wanted 24, got 0

2020-02-08 20:21:20.662 CET [85686] LOG:  redo done at C/B99B4650
2020-02-08 20:21:20.675 CET [83892] LOG:  database system is ready to 
accept connections



Has anyone noticed anything like this before? Any idea how to fix this?


Best regards,

Nick Renders




Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-11 Thread Nick Renders

Hi Thomas,

We are setting up a new test environment with 12.1.
Once it is running, I'll try out those commands and get back with the 
results.


Thanks,

Nick Renders


On 11 Feb 2020, at 2:51, Thomas Munro wrote:


On Mon, Feb 10, 2020 at 4:35 AM Marc  wrote:
We will keep the 12.1 in place so that we can run additional tests to 
assist to pin-point the issue.


Feel free to ask but allow us to recover from these hectic days ;-)


Here's how to get a stack so we can see what it was doing, assuming
you have the Apple developer tools installed:

1.  Find the PID of the backend you're connected to with SELECT
pg_backend_pid().
2.  "lldb -p PID" from a shell to attach to the process, then "cont"
to let it continue running.
3.  Run the query in that backend and wait for the SIGKILL.
4.  In the lldb session, type "bt".

It'll only make sense if your PostgreSQL build has debug symbols, but 
let's see.





PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-14 Thread Nick Renders



Hello,

We recently suffered a database crash which resulted in some corrupt 
records.


I thought I would write a little PL script that would loop through all 
the data and report any inconsistencies. However, I can't get it to work 
properly.


For instance, if I run the following statement in pgAdmin:

SELECT * FROM f_gsxws_transaction WHERE gwta_number = 762513

I get the following message:

	ERROR:  missing chunk number 0 for toast value 8289525 in 
pg_toast_5572299



So, as a test, I created a function that would just retrieve that one 
record:


DECLARE
    rcontent f_gsxws_transaction%ROWTYPE;
BEGIN
    SELECT * INTO rcontent FROM f_gsxws_transaction WHERE gwta_number = 762513;
    RETURN rcontent;
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'Record 762513 is corrupt';
END;


Now, when I run this function, I have noticed two things:

1) The function has no problem executing the SELECT statement. It is
only when "rcontent" is returned that the function fails. This is a
problem, because the ultimate goal is to loop through all records and
only return/alert something in case of an error.


2) The function never enters the EXCEPTION clause. Instead, when it hits 
the RETURN command, it breaks and shows the same message as in pgAdmin: 
missing chunk number 0 for toast value 8289525 in pg_toast_5572299.



Is it possible to check for these kind of errors with a PL script? Or is 
there perhaps a better way to check for corrupt records in a database?


Best regards,

Nick Renders




Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Nick Renders
The problem is that I don't know which column is corrupt. But I found a 
solution: by simply copying the record into another variable, the values 
are parsed and the TOAST errors are thrown.


In case anyone's interested, here's my code, based on an example from 
http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html



DO $f$
DECLARE
    rContent1 record;
    rContent2 record;
    iCounter integer DEFAULT 1;
    iValue integer;
    pTableName varchar := 'f_gsxws_transaction';
    pFieldName varchar := 'gwta_number';
BEGIN
    FOR iValue IN EXECUTE 'SELECT ' || pFieldName || ' FROM ' || pTableName::regclass || ' ORDER BY ' || pFieldName LOOP
        BEGIN
            EXECUTE 'SELECT * FROM ' || pTableName::regclass || ' WHERE ' || pFieldName || ' = $1'
                INTO rContent1
                USING iValue;
            rContent2 := rContent1;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'data for %.% % is corrupt', pTableName, pFieldName, iValue;
        END;
        IF iCounter % 10 = 0 THEN
            RAISE NOTICE '% % records checked', iCounter, pTableName;
        END IF;
        iCounter := iCounter + 1;
    END LOOP;
END;
$f$;


Cheers,

Nick


On 14 Feb 2020, at 16:14, Tom Lane wrote:


"Nick Renders"  writes:
> I thought I would write a little PL script that would loop through all
> the data and report any inconsistencies. However, I can't get it to work
> properly.
> ...
> 1) The function has no problem executing the SELECT statement. It is
> only when "rcontents" is returned, that the function fails. This is a
> problem, because the ultimate goal is to loop through all records and
> only return/alert something in case of an error.
> 2) The function never enters the EXCEPTION clause. Instead, when it hits
> the RETURN command, it breaks and shows the same message as in pgAdmin:
> missing chunk number 0 for toast value 8289525 in pg_toast_5572299.


I think what's happening there is that the function doesn't try to
dereference the value's TOAST pointer during SELECT INTO.  It just stores
that pointer into a variable, and only sometime later when the actual
content of the value is demanded, do you see the error raised.

The solution to that is to do something that uses the contents of the
busted column right away while still inside the EXCEPTION block, perhaps
along the lines of "select md5(mycolumn) into local_variable from..."

A close reading of

https://www.postgresql.org/docs/current/storage-toast.html

would probably help you understand what's happening here.

regards, tom lane





Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-17 Thread Nick Renders

Hi Jeremy,

This happend on PostgreSQL v9.6 which crashed 2 weeks ago.
Since then we have upgraded and restored our server, but my example is 
from the older, corrupt database.


Nick


On 15 Feb 2020, at 5:30, Jeremy Schneider wrote:


On Feb 14, 2020, at 04:39, Nick Renders  wrote:

I get the following message:

  ERROR:  missing chunk number 0 for toast value 8289525 in 
pg_toast_5572299


What version of PostgreSQL are you running? I’ve seen this a number 
of times the past couple years; curious if the lurking bug is still 
observed in latest versions.


-Jeremy

Sent from my TI-83





Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-24 Thread Nick Renders
We have set up a new test environment running PostgreSQL v12.2 on macOS
10.14 and the issue is still there.
One thing I noticed is that the returned columns do not affect the
behaviour:


SELECT * FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier) = 'TEST'

and
	SELECT gwsc_sequence FROM f_gsxws_schedule WHERE UPPER(gwsc_dossier) = 
'TEST'


both kill the postgres service.


I will try to free some time next week to install the Apple developer 
tools and further analyse the problem.


Best regards,

Nick


On 11 Feb 2020, at 12:32, Nick Renders wrote:


Hi Thomas,

We are setting up a new test environment with 12.1.
Once it is running, I'll try out those commands and get back with the 
results.


Thanks,

Nick Renders


On 11 Feb 2020, at 2:51, Thomas Munro wrote:


On Mon, Feb 10, 2020 at 4:35 AM Marc  wrote:
We will keep the 12.1 in place so that we can run additional tests 
to assist to pin-point the issue.


Feel free to ask but allow us to recover from these hectic days ;-)


Here's how to get a stack so we can see what it was doing, assuming
you have the Apple developer tools installed:

1.  Find the PID of the backend you're connected to with SELECT
pg_backend_pid().
2.  "lldb -p PID" from a shell to attach to the process, then "cont"
to let it continue running.
3.  Run the query in that backend and wait for the SIGKILL.
4.  In the lldb session, type "bt".

It'll only make sense if your PostgreSQL build has debug symbols, but 
let's see.





Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-25 Thread Nick Renders

Hi Tom,

1. we used the EDB installer.

2. turning JIT off did make the problem go away. So I guess this was 
causing the Postgres process to crash all along.


Thanks for the help,

Nick


On 24 Feb 2020, at 16:24, Tom Lane wrote:


"Nick Renders"  writes:
> We have set up a new test environment running PostgreSQL v12.2 on macOS
> 10.14 and the issue is still there.


Some nearby threads prompt these two questions:

1. Are you using your own build, or is this from EDB's installer?

2. If the latter, does turning JIT off ("set jit = off") make the
problem go away?

There is as yet no "native" support for --with-llvm on macOS,
ie Apple themselves don't provide sufficient support for that.
EDB seem to have hacked up something that sort of works, but
only sort of.

regards, tom lane





Postgres on macOS 10

2020-03-03 Thread Nick Renders

Hi,

We just noticed something strange with our Postgres server.

We have Postgres 11 and 12 running on macOS 10.14 (Mojave), installed 
with the EDB installer. Whenever the machine is restarted, the Postgres 
service cannot be launched until a macOS user logs in. We have 
"automatic login" turned off in the system preferences, meaning that 
when the machine reboots, you get the macOS login screen.


In the system.log we see the following 2 lines over and over again, 
until a user has logged in:


	Mar  3 09:37:19 postgrestest com.apple.xpc.launchd[1] 
(com.edb.launchd.postgresql-12[319]): Service exited with abnormal code: 
2
	Mar  3 09:37:19 postgrestest com.apple.xpc.launchd[1] 
(com.edb.launchd.postgresql-12): Service only ran for 0 seconds. Pushing 
respawn out by 10 seconds.



It doesn't matter which macOS user logs in, doesn't have to be an 
Administrator. But once a user has logged in, the Postgres service is 
finally launched properly. Afterwards, the macOS user can log out again, 
and the service will continue running.


It doesn't seem to be limited to Mojave, either. I did a quick test on 
our older, decommissioned Postgres server (Postgres 9 on macOS 10.12) 
and the same issue seems to occur there.


Has anyone noticed something similar with macOS? Or is it just our 
setup?


Best regards,

Nick Renders




A limit clause can cause a poor index choice

2020-05-19 Thread Nick Cleaton
The attached script builds a 10G test table which demonstrates a
problem that we have in production with postgresql 12.3-1.pgdg18.04+1
on ubuntu linux. Indexes:

test_orders_o_date_idx btree(o_date)
test_orders_customer_id_o_date_idx btree(customer_id, o_date)

We query for the most recent orders for sets of customers, and
sometimes none of those customers have any orders and the results are
empty:

explain analyze select * from test_orders where
customer_id=any(ARRAY[9993,9997,9912,9954,9100,9101,9102,9234,9500,9512])
order by o_date desc;

QUERY PLAN

 Sort  (cost=24848.96..24870.67 rows=8686 width=1839) (actual
time=1.101..1.102 rows=0 loops=1)
   Sort Key: o_date DESC
   Sort Method: quicksort  Memory: 25kB
   ->  Index Scan using test_orders_customer_id_o_date_idx on
test_orders  (cost=0.43..17361.20 rows=8686 width=1839) (actual
time=1.047..1.047 rows=0 loops=1)
 Index Cond: (customer_id = ANY
('{9993,9997,9912,9954,9100,9101,9102,9234,9500,9512}'::integer[]))
 Planning Time: 3.821 ms
 Execution Time: 1.174 ms
(7 rows)

So far so good. But if we add a limit clause to the query then the
plan goes very wrong:

explain analyze select * from test_orders where
customer_id=any(ARRAY[9993,9997,9912,9954,9100,9101,9102,9234,9500,9512])
order by o_date desc limit 10;

  QUERY PLAN
-
 Limit  (cost=0.43..1660.98 rows=10 width=1839) (actual
time=4990.424..4990.424 rows=0 loops=1)
   ->  Index Scan Backward using test_orders_o_date_idx on test_orders
 (cost=0.43..1442355.43 rows=8686 width=1839) (actual
time=4990.423..4990.423 rows=0 loops=1)
 Filter: (customer_id = ANY
('{9993,9997,9912,9954,9100,9101,9102,9234,9500,9512}'::integer[]))
 Rows Removed by Filter: 500
 Planning Time: 0.063 ms
 Execution Time: 4990.435 ms


Is there something we can adjust to get it to prefer
test_orders_customer_id_o_date_idx even when there's a limit clause ?
#!/usr/bin/python3

import random
import datetime

secs_in_day = 24*60*60

longstr = """iufdpoaiusoto3u5034534i5j345k345lku09s80s9dfjwer.,newrwwerwerwerlwerjlwejrlkewjr""" * 10

print("""

drop table if exists test_orders;
drop sequence if exists test_orders_id_seq;

CREATE SEQUENCE test_orders_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;

CREATE TABLE test_orders (
id integer DEFAULT nextval('test_orders_id_seq'::regclass) NOT NULL,
o_date timestamp with time zone NOT NULL,
customer_id integer,
str1 text,
num1 integer,
long1 text,
long2 text,
long3 text,
long4 text
);

COPY test_orders(o_date, customer_id, str1, num1, long1, long2, long3, long4) FROM stdin;""")

for day in range(5000):
    orders = [(secs_in_day * day + random.randrange(secs_in_day), customer) for customer in range(day, day+1000)]
    for o_date, customer_id in sorted(orders):
        print(datetime.datetime.fromtimestamp(1234234234 + o_date).isoformat(),
              customer_id,
              "blah",
              random.randrange(100),
              longstr,
              longstr,
              longstr,
              longstr,
              sep="\t")

print("""\\.

create index test_orders_o_date_idx on test_orders using btree(o_date);
create index test_orders_customer_id_o_date_idx on test_orders using btree(customer_id, o_date);

analyze test_orders;
""")


Re: A limit clause can cause a poor index choice

2020-05-20 Thread Nick Cleaton
On Tue, 19 May 2020 at 21:56, Mohamed Wael Khobalatte
 wrote:

> I believe a second ordering, by id desc, will get your query to use the right 
> index, and shouldn't be functionally different from what you would expect.

Thanks, that works nicely on our production table, even with much
larger sets of customer_id values.
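
For the archives, the reworked query against the test table from my first
message looks roughly like this (a sketch; the extra sort key is the
integer `id` column and is there purely to steer the planner):

```
SELECT *
FROM test_orders
WHERE customer_id = ANY (ARRAY[9993,9997,9912,9954,9100,9101,9102,9234,9500,9512])
ORDER BY o_date DESC, id DESC  -- the added tie-breaker stops the planner
LIMIT 10;                      -- from walking test_orders_o_date_idx backwards
```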

> What happens when you raise the limit? Say to a 1000?

A limit of 1000 makes it choose the fast plan. A limit of 100 causes
it to choose the fast plan if I raise the stats target on that column
to 250 or above, otherwise not.




Re: A limit clause can cause a poor index choice

2020-05-20 Thread Nick Cleaton
On Tue, 19 May 2020 at 22:15, Michael Lewis  wrote:

> Increase default_statistics_target, at least on that column, and see if
you get a much much better plan. I don't know where I got this query from
online, but here ya go. I'd be curious how frac_MCV in this changes when
default_statistics_target is more like 250 or 500 and the table is analyzed
again to reflect that change.

It chooses the fast plan for a limit of 10 if the stats target is
approaching the number of distinct customer_id values, which is 6000 for
this test table:

 stats |  frac_mcv  | n_distinct | n_mcv | n_hist | correlation | l10 | l100 | l1000
-------+------------+------------+-------+--------+-------------+-----+------+-------
    -1 | 0.01566    |       5728 |    34 |    101 |  0.98172975 | f   | f    | t
   150 | 0.01505    |       5821 |    38 |    151 |  0.9817175  | f   | f    | t
   250 | 0.04347998 |       5867 |   134 |    251 |  0.98155195 | f   | t    | t
   500 | 0.12606017 |       5932 |   483 |    501 |  0.98155344 | f   | t    | t
   750 | 0.18231618 |       5949 |   750 |    751 |  0.98166454 | f   | t    | t
  1000 | 0.2329197  |       5971 |  1000 |   1001 |  0.9816691  | f   | t    | t
  1500 | 0.3312785  |       5982 |  1500 |   1501 |  0.981609   | f   | t    | t
  3000 | 0.6179379  |       5989 |  3000 |   2989 |  0.981612   | f   | t    | t
  4000 | 0.8033856  |       5994 |  4000 |   1994 |  0.9816348  | f   | t    | t
  4500 | 0.8881603  |       5994 |  4500 |   1494 |  0.98160636 | f   | t    | t
  4800 | 0.9281193  |       5993 |  4800 |   1193 |  0.9816273  | f   | t    | t
  4900 | 0.9396781  |       5994 |  4900 |   1094 |  0.9816546  | f   | t    | t
  5000 | 0.9500147  |       5993 |  5000 |    993 |  0.9816481  | t   | t    | t
  6000 | 0.999714   |       5996 |  5923 |     73 |  0.98162216 | t   | t    | t
     1 | 0.5905     |       5998 |  5970 |     28 |  0.98164326 | t   | t    | t


Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Hi folks,

Did something change recently with what versions of libpq-dev are published
to buster-pgdg?  We have a dockerfile based on postgres:12.2 (which is
based on buster-slim) that installs "libpq-dev=$PG_MAJOR.*"  and it just
recently (this week) started failing.  running a brand new postgres
container and checking the apt cache, I only see versions for 11 (coming
from debian sources) and 13 (coming from buster-pgdg main).  I have
verified that the sources list includes both main and 12 -- was it removed
from 12?

$ docker run --rm -it --entrypoint bash postgres:12.2
root@fb7c949f82a0:/# apt update && apt-cache policy libpq-dev
Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://apt.postgresql.org/pub/repos/apt buster-pgdg InRelease [103 kB]
Get:5 http://security.debian.org/debian-security buster/updates/main amd64 Packages [233 kB]
Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7,906 kB]
Get:7 http://deb.debian.org/debian buster-updates/main amd64 Packages [7,868 B]
Get:8 http://apt.postgresql.org/pub/repos/apt buster-pgdg/12 amd64 Packages [861 B]
Get:9 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages [203 kB]
Fetched 8,694 kB in 2s (3,782 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
19 packages can be upgraded. Run 'apt list --upgradable' to see them.
libpq-dev:
  Installed: (none)
  Candidate: 13.0-1.pgdg100+1
  Version table:
     13.0-1.pgdg100+1 500
        500 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages
     11.9-0+deb10u1 500
        500 http://deb.debian.org/debian buster/main amd64 Packages
     11.7-0+deb10u1 500
        500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12


Thanks,
Nick


Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Hi Adrian,

The FAQ you linked to says the following:

> If you really want to use a different version, the packages are available
in separate archive components named after the PostgreSQL major version.
Append that version after "main" in your sources.list. For example, if you
wanted 9.0's libpq5 on Debian Squeeze, use this: deb
http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main *9.0*

In the postgres dockerfile, it _is_ appending the version 12 to the sources
list:

root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list

deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12


However I am still not seeing that version show up.  If I remove 'main',
leaving just 12, no versions show up as coming from the postgres repo at
all.  Am I missing something else here?

-Nick


On Wed, Oct 7, 2020 at 2:56 PM Adrian Klaver 
wrote:

> On 10/7/20 11:01 AM, Nick Aldwin wrote:
> > Hi folks,
> >
> > Did something change recently with what versions of libpq-dev are
> > published to buster-pgdg?  We have a dockerfile based on
> > |postgres:12.2| (which is based on |buster-slim|) that installs
> > |"libpq-dev=$PG_MAJOR.*"|  and it just recently (this week) started
> > failing.  running a brand new postgres container and checking the apt
> > cache, I only see versions for 11 (coming from debian sources) and 13
> > (coming from buster-pgdg main).  I have verified that the sources list
> > includes both |main| and |12| -- was it removed from |12|?
>
> See this FAQ item:
>
>
> https://wiki.postgresql.org/wiki/Apt/FAQ#I_want_libpq5_for_version_X.2C_but_there_is_only_version_Y_in_the_repository
>
> for how it works in the PGDG repos.
>
> On the Debian side V11 is the supported version for Buster.
>
> >
> > $ docker run --rm -it --entrypoint bash postgres:12.2
> > root@fb7c949f82a0:/# apt update && apt-cache policy libpq-dev
> > Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
> > Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
> > Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
> > Get:4 http://apt.postgresql.org/pub/repos/apt buster-pgdg InRelease [103 kB]
> > Get:5 http://security.debian.org/debian-security buster/updates/main amd64 Packages [233 kB]
> > Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7,906 kB]
> > Get:7 http://deb.debian.org/debian buster-updates/main amd64 Packages [7,868 B]
> > Get:8 http://apt.postgresql.org/pub/repos/apt buster-pgdg/12 amd64 Packages [861 B]
> > Get:9 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages [203 kB]
> > Fetched 8,694 kB in 2s (3,782 kB/s)
> > Reading package lists... Done
> > Building dependency tree
> > Reading state information... Done
> > 19 packages can be upgraded. Run 'apt list --upgradable' to see them.
> > libpq-dev:
> >   Installed: (none)
> >   Candidate: 13.0-1.pgdg100+1
> >   Version table:
> >      13.0-1.pgdg100+1 500
> >         500 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages
> >      11.9-0+deb10u1 500
> >         500 http://deb.debian.org/debian buster/main amd64 Packages
> >      11.7-0+deb10u1 500
> >         500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
> > root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
> > deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12
> >
> >
> > Thanks,
> > Nick
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Thanks for the reply.  Should I post to the separate hackers list, or wait
for someone to chime in here?

FWIW, I am able to access older v12 libpq-dev by using the archive apt
list: https://apt-archive.postgresql.org/ -- so we will do that going
forward until this is resolved.

-Nick

On Wed, Oct 7, 2020 at 3:23 PM Adrian Klaver 
wrote:

> On 10/7/20 12:02 PM, Nick Aldwin wrote:
> > Hi Adrian,
> >
> > The FAQ you linked to says the following:
> >
> >  > If you really want to use a different version, the packages are
> > available in separate archive components named after the PostgreSQL
> > major version. Append that version after "main" in your sources.list.
> > For example, if you wanted 9.0's libpq5 on Debian Squeeze, use this: deb
> >
> http://apt.postgresql.org/pub/repos/apt
> squeeze-pgdg main *9.0*
> >
> > In the postgres dockerfile, it _is_ appending the version 12 to the
> > sources list:
> >
> > root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
> >
> > deb http://apt.postgresql.org/pub/repos/apt/  buster-pgdg main 12
> >
> >
> > However I am still not seeing that version show up.  If I remove 'main',
> > leaving just 12, no versions show up as coming from the postgres repo at
> > all.  Am I missing something else here?
>
> Yeah not working for me either. Probably means one of the packagers will
> need to chime in.
> >
> > -Nick
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


concurrent re-partitioning of declarative partitioned tables

2020-11-30 Thread Nick Cleaton
I want to set up a large table on postgresql 12.4, using declarative
partitioning to partition by record creation date. I'd like to have recent
records in small partitions but old records in a few larger partitions, so
I want merges. The merges should be concurrent, in the sense that they lock
out readers or writers only for very short intervals if at all.

I'm looking at adding an extra boolean column and partitioning on that at
the top level, with two parallel date-based partition trees underneath, so
that I can effectively have overlapping date partitions:


create table mytable (
  record_date timestamp with time zone not null,
  _partition_channel boolean,
  ...
)
partition by list (_partition_channel);

create table mytable_chan_null
  partition of mytable for values in (null)
  partition by range (record_date);

create table mytable_chan_true
  partition of mytable for values in (true)
  partition by range (record_date);

create table mytable_day_20200101
  partition of mytable_chan_null
  for values from ('2020-01-01') to ('2020-01-02');

...

create table mytable_day_20200107
  partition of mytable_chan_null
  for values from ('2020-01-07') to ('2020-01-08');


Then to merge several day-partitions into a week-partition:

create table mytable_week_20200101
  partition of mytable_chan_true
  for values from ('2020-01-01') to ('2020-01-08');

... and migrate rows in batches by updating _partition_channel to true,
then finally drop the empty day partitions.
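
As illustration, the per-batch migration step I have in mind is just an
UPDATE that flips the routing column, along these lines (a sketch: it
assumes a unique `id` column that I left out of the DDL above, and the
batch size is arbitrary):

```
-- Flip one batch of rows out of the day partitions (channel NULL) into the
-- week partition (channel TRUE); the UPDATE moves them between partitions.
-- Repeat until 0 rows are updated, then drop the empty day partitions.
UPDATE mytable
SET _partition_channel = true
WHERE id IN (
    SELECT id
    FROM mytable
    WHERE _partition_channel IS NULL
      AND record_date >= '2020-01-01'
      AND record_date <  '2020-01-08'
    LIMIT 1000
);
```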

Since record_date is an insertion timestamp, I don't mind that after this
merge updating the record_date of a merged row could fail due to a missing
partition. Likewise there's no need for new rows to be inserted with
record_date values in previously merged ranges.

Questions:

Are there any hidden pitfalls with this approach ?

Have I missed a simpler way ?

Is there a project out there that will manage this for me ?


Re: concurrent re-partitioning of declarative partitioned tables

2020-12-02 Thread Nick Cleaton
On Wed, 2 Dec 2020 at 16:07, David G. Johnston 
wrote:

> On Mon, Nov 30, 2020 at 8:36 AM Nick Cleaton  wrote:
>
>> I want to set up a large table on postgresql 12.4, using declarative
>> partitioning to partition by record creation date. I'd like to have recent
>> records in small partitions but old records in a few larger partitions, so
>> I want merges. The merges should be concurrent, in the sense that they lock
>> out readers or writers only for very short intervals if at all.
>>
>
> Once a date has passed is the table for that date effectively read-only?
>

No, old records get updated from time to time, although updates are much
less common than for recent records.


Re: concurrent re-partitioning of declarative partitioned tables

2020-12-02 Thread Nick Cleaton
On Wed, 2 Dec 2020 at 15:59, Michael Lewis  wrote:

> You can not have overlapping partitions that are both attached.
>

Not directly, no. That's why I'm considering the _partition_channel hack.

Why do you want to merge partitions that you are "done with" instead of
> just leaving them partitioned by day?
>

I have some random access index lookups on columns not in the partition
key, where values are unique over the entire table so at most one partition
is going to return a row. A lookup that touches 4 or 5 pages in each of 100
partition indexes is more expensive than one that touches 6 or 7 pages in
each of 10 larger partition indexes.

Why are you partitioning at all? Are you confident that you need partitions
> for performance & that the trade-offs are worth the cost, or are you
> needing to detach/drop old data quickly to adhere to a retention policy?
>

I do want cheap drops of old data, but also many queries have indexable
conditions on non-key columns and also only want records from the most
recent N days, so partition pruning is useful there with small partitions
for recent records.


Re: Is this a reasonable use for advisory locks?

2022-04-14 Thread Nick Cleaton
On Thu, 14 Apr 2022 at 10:47, Steve Baldwin  wrote:

> Ok, so you want to allow _other_ updates to a customer while this process
> is happening? In that case, advisory locks will probably work. The only
> consideration is that the 'id' is a bigint. If your customer id maps to
> that, great. If not (for example we use UUID's), you will need some way to
> convert that id to a bigint.
>

Alternatively, create a new table that records the start timestamp of the
most recent run of your code block for each customer, and update that as
the first action in your transaction. Then row locks on that table will
protect you from concurrent runs.
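
A minimal sketch of what I mean, with made-up table and column names:

```
CREATE TABLE customer_job_runs (
    customer_id  bigint PRIMARY KEY,
    last_started timestamptz NOT NULL
);

-- First statement inside the transaction that wraps the code block: the row
-- lock taken here blocks any concurrent run for the same customer until this
-- transaction commits or rolls back.
INSERT INTO customer_job_runs (customer_id, last_started)
VALUES (42, now())
ON CONFLICT (customer_id)
DO UPDATE SET last_started = EXCLUDED.last_started;
```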


Re: Handling glibc v2.28 breaking changes

2022-04-25 Thread Nick Cleaton
On Mon, 25 Apr 2022 at 12:45, Laurenz Albe  wrote:

>
> You could consider upgrade in several steps:
>
> - pg_upgrade to v14 on the current operating system
> - use replication, than switchover to move to a current operating system
> on a different
>   machine
> - REINDEX CONCURRENTLY all indexes on string expressions
>
> You could get data corruption and bad query results between the second and
> the third steps,
> so keep that interval short.
>

We did something like this, with the addition of a step where we used a
new-OS replica to run amcheck's bt_index_check() over all of the btree
indexes to find those actually corrupted by the libc upgrade in practice
with our data. It was a small fraction of them, and we were able to fit an
offline reindex of those btrees and all texty non-btree indexes into an
acceptable downtime window, with REINDEX CONCURRENTLY of everything else as
a lower priority after the upgrade.
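
For reference, the check was roughly along these lines (a sketch, not our
exact script):

```
-- Requires the amcheck extension (CREATE EXTENSION amcheck).
-- bt_index_check() raises an error naming the first corrupted index it hits,
-- so building a complete list means driving it index by index.
SELECT c.relname AS index_name, bt_index_check(c.oid)
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_am am ON am.oid = c.relam
WHERE am.amname = 'btree'
  AND i.indisvalid;
```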


Re: AW: [Extern] Re: consistent postgresql snapshot

2022-05-12 Thread Nick Cleaton
On Thu, 12 May 2022 at 14:48, Tom Lane  wrote:

> "Zwettler Markus (OIZ)"  writes:
> > I don't want to do use the normal backup algorithm where pg_start_backup
> + pg_stop_backup will fix any fractured block and I am required to have all
> archived logfiles, therefore.
> > I want to produce an atomic consistent disk snapshot.
>
> [ shrug... ]  You can't have that.  [snip]
>
> The only way you could get a consistent on-disk image is to shut
> the server down (being sure to do a clean not "immediate" shutdown)
> and then take the snapshot.
>

I think you could work around that by taking a dirty snapshot, making a
writable filesystem from it, waiting until you've archived enough WAL to
get that to a consistent state, and then firing up a temporary postmaster
on that filesystem to go through recovery and shut down cleanly.


Re: User Name Maps seem broken in 11.1 on CentOS 7

2019-01-29 Thread Nick B
Hey!

I think I've figured out what was your problem.

You have created a mapping to allow OS user "foobar" auth as pg role
"postgres".

What happens though (and error message actually indicates that) is you are
trying to authenticate as pg role "foobar". This is probably due to you
executing `psql` in terminal without specifying an actual user name.
The way to do this properly would be to execute `psql -U postgres`.

Unfortunately, you've left before I was able to tell you this.

Kind regards,
Nick.


Unrecognized Node Type Warning

2023-05-18 Thread Arora, Nick
We are using PostgreSQL 12.13. We are noticing that queries that attempt to 
retrieve an element of an array by specifying its position cause a warning to 
be emitted: "WARNING:  unrecognized node type: 110".

Would appreciate your help diagnosing the issue and identifying steps to 
resolve.

Queries that reproduce the issue:

SELECT ('{0}'::int2[])[0];
WARNING:  unrecognized node type: 110
 int2
------
 
(1 row)


SELECT ('0'::int2vector)[0];
WARNING:  unrecognized node type: 110
 int2vector
------------
          0
(1 row)

SELECT (indkey::int2[])[0] FROM pg_index limit 1;
WARNING:  unrecognized node type: 110
 indkey
--------
      1
(1 row)

SELECT scores[1], scores[2], scores[3], scores[4] FROM (select('{10,12,14,16}'::int[]) AS scores) AS round;
WARNING:  unrecognized node type: 110
WARNING:  unrecognized node type: 110
WARNING:  unrecognized node type: 110
WARNING:  unrecognized node type: 110
 scores | scores | scores | scores
--------+--------+--------+--------
     10 |     12 |     14 |     16
(1 row)



Re: Unrecognized Node Type Warning

2023-05-18 Thread Arora, Nick
Hello Tom,

Thanks for the information.  Here are the extensions we are using:

uuid-ossp
pgcrypto
citext
btree_gin

The warnings did start emitting shortly after the installation of btree_gin, so 
it seems somewhat suspect


From: Tom Lane 
Date: Thursday, May 18, 2023 at 11:30 AM
To: Arora, Nick 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Unrecognized Node Type Warning
EXT - t...@sss.pgh.pa.us

"Arora, Nick"  writes:
> We are using PostgreSQL 12.13. We are noticing that queries that attempt to 
> retrieve an element of an array by specifying its position cause a warning to 
> be emitted: "WARNING:  unrecognized node type: 110".

I don't see that here, so I'm guessing it's coming from some extension.
What extensions do you have loaded?

Node type 110 would be T_SubscriptingRef in v12, which is a type name
that didn't exist in earlier versions (it used to be called ArrayRef),
so it's not very hard to believe that some extension missed out
support for that type.  But the only core-PG suspect is
pg_stat_statements, and I can see that it does know that node type.

regards, tom lane



Re: Unrecognized Node Type Warning

2023-05-18 Thread Arora, Nick
To provide more complete information:

Here is the name and version of each extension we have installed:
azure   (1.0)
btree_gin (1.3)
citext   (1.6)
pgcrypto  (1.3)
plpgsql  (1.0)
uuid-ossp (1.1)



archive_command debugging

2023-08-23 Thread Nick Renders
Hello,

I was wondering if anyone had any good tips for "debugging" the archive_command 
in the postgresql.conf.

I want to see what certain settings are when the command is run, like $PATH and 
id, but I don't want it to be successful either, so that the WAL files remain 
untouched.

Any tips?
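
For context, the kind of throwaway setting I've been considering is the
sketch below (via ALTER SYSTEM, though the same string could go straight
into postgresql.conf), assuming the server log captures the command's
output:

```
-- Log what the archiver's environment looks like, then exit non-zero so the
-- WAL segment is kept and retried instead of being archived.
ALTER SYSTEM SET archive_command = 'echo "would archive %f as $(id)"; echo "PATH=$PATH"; exit 1';
SELECT pg_reload_conf();

-- Afterwards, put the real command back:
-- ALTER SYSTEM RESET archive_command;
-- SELECT pg_reload_conf();
```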

Thanks,

Nick Renders




Re: Start service

2023-09-22 Thread Nick Ivanov
I'd check if there is already "postmaster.pid" in C:\Program
Files\PostgreSQL\15\data, left over from a previous abend.

On Fri, Sep 22, 2023 at 2:40 PM Brad White  wrote:

> I'm trying to start a v15 service on a Windows 2012 R2 server where it
> hasn't been used for a while.
> The service is set to run as pgUser.
> ...
> If I sub 'start' for 'runservice' and try to start it from CLI, I get
>  C:\Users\administrator>"C:\Program Files\PostgreSQL\15\bin\pg_ctl.exe" start
>  -N "postgresql-x64-15" -D "C:\Program Files\PostgreSQL\15\data" -w
>  waiting for server to start
>  2023-09-22 12:48:05.438 CDT [4796] FATAL:  could not create lock file "postmaster.pid": Permission denied
>

-- 
Nick Ivanov
Solution Architect
www.enterprisedb.com


Re: Gradual migration from integer to bigint?

2023-10-05 Thread Nick Cleaton
On Sat, 30 Sept 2023, 23:37 Tom Lane,  wrote:

>
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
>

An alternative for NOT NULL columns would be to use a new attnum for the
bigint version of the id, but add a column to pg_attribute allowing linking
the new id col to the dropped old id col, to avoid the table rewrite.

Global read code change needed: on finding a NULL in a NOT NULL column,
check for a link to a dropped old col and use that value instead if found.
The check could be almost free in the normal case if there's already a
check for unexpected NULL or tuple too short.

Then a metadata-only operation can create the new id col and drop and
rename and link the old id col, and fix up fkeys etc for the attnum change.

Indexes are an issue. Require the in-advance creation of indexes like
btree(id::bigint) mirroring every index involving id maybe ? Those could
then be swapped in as part of the same metadata operation.
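
For concreteness, the kind of pre-built mirror index I mean, sketched
against a hypothetical `orders` table with an integer `id`:

```
-- Built in advance (and concurrently), so it can be swapped in when the
-- column switches over to its bigint attnum.
CREATE INDEX CONCURRENTLY orders_id_bigint_idx ON orders ((id::bigint));
```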


New addition to the merge sql standard

2023-11-16 Thread Nick DeCoursin
Dear Postgres Administrators,

There was a great article about `merge` by Lukas Fittl here:
https://pganalyze.com/blog/5mins-postgres-15-merge-vs-insert-on-conflict

In his article, he highlights one of the severe disadvantages of merge:

The comment that he essentially made is that the downside of MERGE's
> handling of concurrency is that when you concurrently INSERT, so at the
> same time as you're executing the MERGE statement, there is another INSERT
> going on, then MERGE might not notice that. MERGE would go into its INSERT
> logic, and then it would get a unique violation.


This means that any individual row insert during the insert logic of the
merge query can cause a unique violation and tank the entire merge query.

I explained this in more detail here:
https://stackoverflow.com/questions/77479975/postgres-merge-silently-ignore-unique-constraint-violation

In my opinion, it would be better for merge to offer the functionality to
simply ignore the rows that cause unique violation exceptions instead of
tanking the whole query.
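
To illustrate the shape of the problem (a contrived sketch; the table and
values are made up):

-- hypothetical target table
CREATE TABLE target (id int PRIMARY KEY, val text);

MERGE INTO target t
USING (VALUES (1, 'x'), (2, 'y')) AS s(id, val)
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET val = s.val
WHEN NOT MATCHED THEN
  INSERT (id, val) VALUES (s.id, s.val);
-- if another session commits a row with id = 2 after the join but before
-- the INSERT, the MERGE raises a unique violation and the whole statement
-- is rolled back

-- whereas the closest INSERT equivalent simply skips the conflicting row:
INSERT INTO target (id, val) VALUES (2, 'y')
ON CONFLICT (id) DO NOTHING;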

Thank you,
Nick


Re: New addition to the merge sql standard

2023-11-20 Thread Nick DeCoursin
 I don't have any postgres development
experience (actually we did go into the postgres source for a database
project in college haha).

However, it might be pertinent for me to reference this under the READ
COMMITTED semantics:

MERGE allows the user to specify various combinations of INSERT, UPDATE and
> DELETE subcommands. A MERGE command with both INSERT and UPDATE subcommands
> looks similar to INSERT with an ON CONFLICT DO UPDATE clause but does not
> guarantee that either INSERT or UPDATE will occur. If MERGE attempts an
> UPDATE or DELETE and the row is concurrently updated but the join
> condition still passes for the current target and the current source tuple,
> then MERGE will behave the same as the UPDATE or DELETE commands and
> perform its action on the updated version of the row. However, because
> MERGE can specify several actions and they can be conditional, the
> conditions for each action are re-evaluated on the updated version of the
> row, starting from the first action, even if the action that had originally
> matched appears later in the list of actions. On the other hand, if the
> row is concurrently updated or deleted so that the join condition fails,
> then MERGE will evaluate the condition's NOT MATCHED actions next, and
> execute the first one that succeeds. If MERGE attempts an INSERT and a
> unique index is present and a duplicate row is concurrently inserted, then
> a uniqueness violation error is raised; MERGE does not attempt to avoid
> such errors by restarting evaluation of MATCHED conditions.
>

With this in mind, the `merge` statement doesn't block on concurrent
inserts, nor is that necessary. The merge semantics imply that there is no
blocking/waiting. Following from this, the insert within merge doesn't need
to do any blocking or waiting either; only when it actually attempts to
commit the insert and that attempt fails should it fall back to the
equivalent of `on conflict do nothing`. Therefore, due to the original merge
semantics, the insert inside merge doesn't need to follow the same
requirements as a normal `insert`, and it doesn't need to *wait* - it's a
best-effort thing.

In my opinion, `merge` is meant for batch operations on large data sets, and
that's the best way to think about it. It's not meant for perfectly
serializable workloads. It's meant for moving huge datasets efficiently in a
best-effort manner.

Cheers,
Nick

On Thu, Nov 16, 2023 at 6:13 PM Alvaro Herrera 
wrote:

> On 2023-Nov-16, Nick DeCoursin wrote:
>
> > In my opinion, it would be better for merge to offer the functionality to
> > simply ignore the rows that cause unique violation exceptions instead of
> > tanking the whole query.
>
> "ignore" may not be what you want, though.  Perhaps the fact that insert
> (coming from the NOT MATCHED clause) fails (== conflicts with a tuple
> concurrently inserted in an unique or exclusion constraint) should
> transform the row operation into a MATCHED case, so it'd fire the other
> clauses in the overall MERGE operation.  Then you could add a WHEN
> MATCHED DO NOTHING case which does the ignoring that you want; or just
> let them be handled by WHEN MATCHED UPDATE or whatever.  But you may
> need some way to distinguish rows that appeared concurrently from rows
> that were there all along.
>
> In regards to the SQL standard, I hope what you're saying is merely not
> documented by them.  If it indeed isn't, it may be possible to get them
> to accept some new behavior, and then I'm sure we'd consider
> implementing it.  If your suggestion goes against what they already
> have, I'm afraid you'd be doomed.  So the next question is, how do other
> implementations handle this case you're talking about?  SQL Server, DB2
> and Oracle being the relevant ones.
>
> Assuming the idea is good and there are no conflicts, then maybe it's
> just lack of round tuits.
>
> Happen to have some?
>
> I vaguely recall thinking about this, and noticing that implementing
> something of this sort would require messing around with the ExecInsert
> interface.  It'd probably require splitting it in pieces, similar to how
> ExecUpdate was split.
>
> There are some comments in the code about possible "live-locks" where
> merge would be eternally confused between inserting a new row which it
> then wants to delete; or something like that.  For sure we would need to
> understand the concurrent behavior of this new feature very clearly.
>
>
> An interesting point is that our inserts *wait* to see whether the
> concurrent insertion commits or aborts, when a unique constraint is
> involved.  I'm not sure you want to have MERGE blocking on concurrent
> inserts.  This is all assuming READ COMMITTED semantics; on REPEATABLE
> READ or higher, I think you're just screwed, because of course MERGE is
> not going to get a snapshot that sees the rows inserted by transactions
> that started after.
>
> You'd need to explore all this very carefully.
>
> --
> Álvaro Herrera         Breisgau, Deutschland  —
> https://www.EnterpriseDB.com/
>


could not open file "global/pg_filenode.map": Operation not permitted

2024-02-26 Thread Nick Renders
Hello,

We have a Postgres server that intermittently logs the following:

2024-02-26 10:29:41.580 CET [63962] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:11.147 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:30:11.149 CET [63975] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:35.941 CET [63986] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:41.546 CET [63991] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:44.398 CET [63994] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:31:11.149 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:31:11.151 CET [64008] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:31:41.546 CET [64023] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:32:11.150 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:32:11.153 CET [64035] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:32:41.547 CET [64050] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:33:11.151 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:33:11.153 CET [64062] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:33:41.548 CET [64087] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted


This has happened 3 times over the last 2 weeks now, without any indication 
what caused it.
The privileges of those 2 files are all in order.
When this happens, the server is no longer accessible, and we need to restart 
the service (pg_ctl restart).
Once restarted, Postgres runs fine again for a couple of days.

We are running PostgreSQL 16.2 on macOS 14.3.1.

Any idea what might be causing this issue, or how to resolve it?


Best regards,

Nick Renders




Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-03-11 Thread Nick Renders
Thank you for your reply Laurenz.
I don't think it is related to any third party security software. We have 
several other machines with a similar setup, but this is the only server that 
has this issue.

The one thing different about this machine however, is that it runs 2 instances 
of Postgres:
- cluster A on port 165
- cluster B on port 164
Cluster A is actually a backup from another Postgres server that is restored on 
a daily basis via Barman. This means that we login remotely from the Barman 
server over SSH, stop cluster A's service (port 165), clear the Data folder, 
restore the latest backup into the Data folder, and start up the service again.
Cluster B's Data and service (port 164) remain untouched during all this time. 
This is the cluster that experiences the intermittent "operation not permitted" 
issue.

Over the past 2 weeks, I have suspended our restore script and the issue did 
not occur.
I have just performed another restore on cluster A and now cluster B is 
throwing errors in the log again.

Any idea why this is happening? It does not occur with every restore, but it 
seems to be related anyway.

Thanks,

Nick Renders


On 26 Feb 2024, at 16:29, Laurenz Albe wrote:

> On Mon, 2024-02-26 at 15:14 +0100, Nick Renders wrote:
>> We have a Postgres server that intermittently logs the following:
>>
>> 2024-02-26 10:29:41.580 CET [63962] FATAL:  could not open file 
>> "global/pg_filenode.map": Operation not permitted
>> 2024-02-26 10:30:11.147 CET [90610] LOG:  could not open file 
>> "postmaster.pid": Operation not permitted; continuing anyway
>>
>> This has happened 3 times over the last 2 weeks now, without any indication 
>> what caused it.
>> The privileges of those 2 files are all in order.
>> When this happens, the server is no longer accessible, and we need to 
>> restart the service (pg_ctl restart).
>> Once restarted, Postgres runs fine again for a couple of days.
>>
>> We are running PostgreSQL 16.2 on macOS 14.3.1.
>
> Perhaps that is some kind of virus checker or something else that locks files.
>
> Yours,
> Laurenz Albe




Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-03-12 Thread Nick Renders
On 11 Mar 2024, at 16:04, Adrian Klaver wrote:

> On 3/11/24 03:11, Nick Renders wrote:
>> Thank you for your reply Laurenz.
>> I don't think it is related to any third party security software. We have 
>> several other machines with a similar setup, but this is the only server 
>> that has this issue.
>>
>> The one thing different about this machine however, is that it runs 2 
>> instances of Postgres:
>> - cluster A on port 165
>> - cluster B on port 164
>> Cluster A is actually a backup from another Postgres server that is restored 
>> on a daily basis via Barman. This means that we login remotely from the 
>> Barman server over SSH, stop cluster A's service (port 165), clear the Data 
>> folder, restore the latest back into the Data folder, and start up the 
>> service again.
>> Cluster B's Data and service (port 164) remain untouched during all this 
>> time. This is the cluster that experiences the intermittent "operation not 
>> permitted" issue.
>>
>> Over the past 2 weeks, I have suspended our restore script and the issue did 
>> not occur.
>> I have just performed another restore on cluster A and now cluster B is 
>> throwing errors in the log again.
>
> Since it seems to be the trigger, what are the contents of the restore script?
>
>>
>> Any idea why this is happening? It does not occur with every restore, but it 
>> seems to be related anyway.
>>
>> Thanks,
>>
>> Nick Renders
>>
>
>
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com



> ...how are A and B connected?

The 2 clusters are not connected. They run on the same macOS 14 machine with a 
single Postgres installation ( /Library/PostgreSQL/16/ ) and their respective 
Data folders are located on the same volume ( 
/Volumes/Postgres_Data/PostgreSQL/16/data and 
/Volumes/Postgres_Data/PostgreSQL/16-DML/data ). Beside that, they run 
independently on 2 different ports, specified in the postgresql.conf.


> ...run them under different users on the system.

Are you referring to the "postgres" user / role? Does that also mean setting up 
2 postgres installation directories?


> ...what are the contents of the restore script?

## stop cluster A
ssh postgres@10.0.0.1 '/Library/PostgreSQL/16/bin/pg_ctl -D 
/Volumes/Postgres_Data/PostgreSQL/16/data stop'

## save config files (ARC_postgresql_16.conf is included in postgresql.conf and 
contains cluster-specific information like the port number)
ssh postgres@10.0.0.1 'cd /Volumes/Postgres_Data/PostgreSQL/16/data && cp 
ARC_postgresql_16.conf ../ARC_postgresql_16.conf'
ssh postgres@10.0.0.1 'cd /Volumes/Postgres_Data/PostgreSQL/16/data && cp 
pg_hba.conf ../pg_hba.conf'

## clear data directory
ssh postgres@10.0.0.1 'rm -r /Volumes/Postgres_Data/PostgreSQL/16/data/*'

## transfer recovery (this will copy the backup "20240312T040106" and any 
lingering WAL files into the Data folder)
barman recover --remote-ssh-command 'ssh postgres@10.0.0.1' pg 20240312T040106 
/Volumes/Postgres_Data/PostgreSQL/16/data

## restore config files
ssh postgres@10.0.0.1 'cd /Volumes/Postgres_Data/PostgreSQL/16/data && cd .. && 
mv ARC_postgresql_16.conf 
/Volumes/Postgres_Data/PostgreSQL/16/data/ARC_postgresql_16.conf'
ssh postgres@10.0.0.1 'cd /Volumes/Postgres_Data/PostgreSQL/16/data && cd .. && 
mv pg_hba.conf /Volumes/Postgres_Data/PostgreSQL/16/data/pg_hba.conf'

## start cluster A
ssh postgres@10.0.0.1 '/Library/PostgreSQL/16/bin/pg_ctl -D 
/Volumes/Postgres_Data/PostgreSQL/16/data start > /dev/null'


This script runs on a daily basis at 4:30 AM. It did so this morning and there 
was no issue with cluster B. So even though the issue is most likely related to 
the script, it does not cause it every time.


Best regards,

Nick Renders





Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-03-22 Thread Nick Renders
On 13 Mar 2024, at 12:35, Stephen Frost wrote:

> Greetings,
>
> * Nick Renders (postg...@arcict.com) wrote:
>>> ...run them under different users on the system.
>>
>> Are you referring to the "postgres" user / role? Does that also mean setting 
>> up 2 postgres installation directories?
>
> Yes, two separate MacOS user accounts is what I was suggesting.  You
> could use the same postgres binaries though, no need to have two
> installation of them.  You'd need seperate data directories, of course,
> as you have currently.
>
>> This script runs on a daily basis at 4:30 AM. It did so this morning and 
>> there was no issue with cluster B. So even though the issue is most likely 
>> related to the script, it does not cause it every time.
>
> Seems likely that it's some sort of race condition.
>
> Thanks,
>
> Stephen

We now have a second machine with this issue: it is an Intel Mac mini running 
macOS Sonoma (14.4) and PostgreSQL 16.2.
This one only has a single Data directory, so there are no multiple instances 
running.

I installed Postgres yesterday and restored a copy from our live database in 
the Data directory. The Postgres process started up without problems, but after 
40 minutes it started throwing the same errors in the log:

2024-03-21 11:49:27.410 CET [1655] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-03-21 11:49:46.955 CET [1760] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-03-21 11:50:07.398 CET [965] LOG:  could not open file 
"postmaster.pid": Operation not permitted; continuing anyway

I stopped and started the process, and it continued working again until around 
21:20, when the issue popped up again. I wasn't doing anything on the machine 
at that time, so I have no idea what might have triggered it.

Is there perhaps some feature that I can enable that logs which processes use 
these 2 files?

Thanks,

Nick Renders




Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-03-28 Thread Nick Renders

On 22 Mar 2024, at 17:00, Alban Hertroys wrote:

> On Fri, 22 Mar 2024 at 15:01, Nick Renders wrote:
>
>> We now have a second machine with this issue: it is an Intel Mac mini
>> running macOS Sonoma (14.4) and PostgreSQL 16.2.
>> This one only has a single Data directory, so there are no multiple
>> instances running.
>
> I don't think that having a single Data directory prevents multiple
> instances from running. That's more of a matter of how often pg_ctl was
> called with the start command for that particular data directory.
>
>> I installed Postgres yesterday and restored a copy from our live database
>> in the Data directory.
>
> How did you restore that copy? Was that a file-based copy perhaps? Your
> files may have incorrect owners or permissions in that case.
>
>> The Postgres process started up without problems, but after 40 minutes it
>> started throwing the same errors in the log:
>>
>> 2024-03-21 11:49:27.410 CET [1655] FATAL:  could not open file
>> "global/pg_filenode.map": Operation not permitted
>> 2024-03-21 11:49:46.955 CET [1760] FATAL:  could not open file
>> "global/pg_filenode.map": Operation not permitted
>> 2024-03-21 11:50:07.398 CET [965] LOG:  could not open file
>> "postmaster.pid": Operation not permitted; continuing anyway
>
> It's possible that some other process put a lock on these files. Spotlight
> perhaps? Or TimeMachine?
>
>> I stopped and started the process, and it continued working again until
>> around 21:20, when the issue popped up again. I wasn't doing anything on
>> the machine at that time, so I have no idea what might have triggered it.
>>
>> Is there perhaps some feature that I can enable that logs which processes
>> use these 2 files?
>
> IIRC, MacOS comes shipped with the lsof command, which will tell you which
> processes have a given file open. See man lsof.
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.


I have tried the lsof command, but it returns no info about the
postmaster.pid and global/pg_filenode.map files, so I take it they are not
open at that moment.


Spotlight indexing has been disabled, and TimeMachine takes no snapshots 
of the volume where the data resides.


Looking at the 2 machines that are having this issue (and the others 
that don't), I think it is somehow related to the following setup:

- macOS Sonoma (14.4 and 14.4.1)
- data directory on an external drive

That external drive (a Promise RAID system in one case, a simple SSD in 
the other) has the option "ignore ownership" on by default. I have tried 
disabling that, and updating the data directory to have owner + 
read/write access for the postgres user. It seemed to work at first, but 
just now the issue re-appeared again.
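
For reference, setting the ownership and permissions was roughly the
following, assuming the service runs as the postgres user (the mount point
below is just an example, not the actual path):

sudo chown -R postgres /Volumes/ExternalSSD/PostgreSQL/16/data
sudo chmod 700 /Volumes/ExternalSSD/PostgreSQL/16/data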


Any other suggestions?

Thanks,

Nick Renders

Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-04-03 Thread Nick Renders
On 29 Mar 2024, at 4:25, Thomas Munro wrote:
>
> I don't have any specific ideas and I have no idea what "ignore
> ownership" means ... what kind of filesystem is running on it?  For
> the simple SSD, is it directly connected, running a normal Apple APFS
> filesystem, or something more complicated?
>
> I wonder if this could be related to the change in 16 which started to
> rename that file:
>
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d8cd0c6c95c0120168df93aae095df4e0682a08a
>
> Did you ever run 15 or earlier on that system?


In the macOS Finder, when you show the Info (command+i) for an external drive 
(or any partition that is not the boot drive), there is a checkbox "Ignore 
ownership on this volume" in the Permissions section. I think it is by default 
"on" for external drives.

The external SSD is an Orico drive that is connected with USB-C. It is 
initialised as a GUID Partition Map with a single APFS partition.

We have run PostgreSQL 15 and earlier, before upgrading to 16 when it came out 
last year. We didn't have any problems with 16 until recently, after upgrading 
to Sonoma.


Nick




TRUNCATE memory leak with temporary tables?

2021-05-27 Thread Nick Muerdter
I've been seeing what looks like unbounded memory growth (until the OOM killer 
kicks in and kills the postgres process) when running a pl/pgsql function that 
performs TRUNCATE statements against various temporary tables in a loop. I 
think I've been able to come up with some fairly simple reproductions of the 
issue in isolation, but I'm trying to figure out if this is a memory leak or if 
I'm perhaps doing something wrong with tuning or other settings.

What I've observed:

- The memory growth occurs if the temp table has indexes or a primary key set 
on it.
- Alternatively, the memory growth also occurs if the temp table has certain 
column types on it (eg, "text" types).
- If the table doesn't have indexes and only has integer columns present, then 
the memory growth does *not* occur.
- I originally saw this against a PostgreSQL 12 server, but I've tested this 
against PostgreSQL 9.6.22, 12.7, and 13.3 Docker containers and reproduced it 
against all versions in the containers.

Here are 2 separate examples that seem to show the memory growth on the server 
(the first being a table with a "text" column, the second example having no 
text column but a primary key index):

DO $$
  DECLARE
i bigint;
  BEGIN
CREATE TEMPORARY TABLE pg_temp.foo (id integer, bar text);

FOR i IN 1..2 LOOP
  TRUNCATE pg_temp.foo;
END LOOP;
  END
$$

DO $$
  DECLARE
i bigint;
  BEGIN
CREATE TEMPORARY TABLE pg_temp.foo (id integer);
ALTER TABLE pg_temp.foo ADD PRIMARY KEY (id);

FOR i IN 1..2 LOOP
  TRUNCATE pg_temp.foo;
END LOOP;
  END
$$

Compare that to this example (which doesn't have an index or any other column 
types that trigger this), which does *not* show any memory growth:

DO $$
  DECLARE
i bigint;
  BEGIN
CREATE TEMPORARY TABLE pg_temp.foo (id integer);

FOR i IN 1..2 LOOP
  TRUNCATE pg_temp.foo;
END LOOP;
  END
$$

Any help in determining what's going on here (or if there are other ways to go 
about this) would be greatly appreciated!

Thank you!
Nick




Re: TRUNCATE memory leak with temporary tables?

2021-05-28 Thread Nick Muerdter
On Fri, May 28, 2021, at 7:22 AM, Tom Lane wrote:
> The text column would cause the table to have an associated toast table [1],
> which in turn would have an index.  Both of those would be reallocated as
> new files on-disk during TRUNCATE, just like the table proper.
> 
> A plausible theory here is that TRUNCATE leaks some storage associated
> with an index's relcache entry, but not any for a plain table.
> 
>   regards, tom lane
> 
> [1] https://www.postgresql.org/docs/current/storage-toast.html

Yeah, I forgot to mention this originally, but I see memory growth against a 
"varchar(501)" field, but *not* against a "varchar(500)" field, so I was 
wondering if there was some length threshold that triggered something with 
toast table behavior somehow involved. But if the toast table involves an 
index, then maybe all of this gets back to just the indexes like you say.

And I originally thought this issue was limited to temp tables, but now I'm not 
so sure. I seem to be able to reproduce the memory growth against regular 
tables (both normal and UNLOGGED) too:

DO $$
  DECLARE
i bigint;
  BEGIN
CREATE TABLE public.foo (id integer, bar text);

FOR i IN 1..2 LOOP
  TRUNCATE public.foo;
END LOOP;
  END
$$

The memory growth seems to be slower in this case, so maybe that's why I didn't 
catch it earlier, but I think it's growing at roughly the same rate; it's just 
that this loop runs more slowly against real tables than against temp tables. 
For example, I see similar total memory growth by the time this reaches 100,000 
loops for either temp or non-temp tables; the temp version just reaches that 
point a lot more quickly (which makes sense).

Thanks!
Nick




pg_upgrade - fe_sendauth: no password supplied

2021-09-06 Thread Nick Renders

Hello,

I have been trying to use the pg_upgrade command to update a PostgreSQL 
11 environment to 13 on macOS 11.


I have followed the steps in the documentation, but the command always 
fails when trying to connect to the original database. This is the 
command that is sent:


	/Library/PostgreSQL/13/bin/pg_upgrade -b /Library/PostgreSQL/11/bin -B 
/Library/PostgreSQL/13/bin -d /Volumes/Postgres_Data/PostgreSQL/11/data 
-D /Volumes/Postgres_Data/PostgreSQL/13/data -p 49156 -P 49155 -U 
postgres -j 24 -v


And this is what is logged:

connection to database failed: fe_sendauth: no password supplied
could not connect to source postmaster started with the command:
	"/Library/PostgreSQL/11/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D 
"/Volumes/Postgres_Data/PostgreSQL/11/data" -o "-p 49156 -b  -c 
listen_addresses='' -c unix_socket_permissions=0700 -c 
unix_socket_directories='/Volumes/Free/Upgrade'" start
	"/Library/PostgreSQL/11/bin/pg_ctl" -w -D 
"/Volumes/Postgres_Data/PostgreSQL/11/data" -o "" -m fast stop >> 
"pg_upgrade_server.log" 2>&1



According to the documentation, the connection should be established 
with the data in the .pgpass file. Its contents look like this (the 
password has been changed) :


localhost:49155:*:postgres:password1234
localhost:49156:*:postgres:password1234

The .pgpass file works without problems with the pg_dump and pg_restore 
commands, so I'm fairly certain its contents and privileges are set 
correctly.



The PostgreSQL documentation also mentions that you can update the 
pg_hba.conf file to use the authentication method "peer". This has no effect 
either; however, when I set it to "trust", the command goes through just 
fine.
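
For the record, the temporary pg_hba.conf entry that made it work looked
roughly like this - I assume pg_upgrade connects over the local Unix socket,
and I reverted the change again afterwards:

# temporary, for the duration of pg_upgrade only
local   all   postgres   trust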


So I have been able to do the upgrade, but I am still wondering why I 
got the error in the first place. Any idea why the .pgpass file isn't 
working with the pg_upgrade command?


Best regards,

Nick Renders




PostgreSQL 14: pg_dump / pg_restore error: could not write to the communication channel: Broken pipe

2021-10-15 Thread Nick Renders

Hello,

I have been trying to import a Postgres 11 database into Postgres 14, 
but the pg_restore command exits with the following message:


	pg_restore: error: could not write to the communication channel: Broken 
pipe


The command I sent looks like this:

	/Library/PostgreSQL/14/bin/pg_restore -h localhost -p 48100 -U postgres 
-w -C -d template1 -j 24 -Fd /Volumes/Migration/dbname --verbose



It seems that the multiple jobs parameter is the cause. If I specify "-j 
1", the command works without problems. If I specify "-j 2" or higher, I 
get the above error after a few seconds.


Postgres is running on a Mac Pro 12-core machine, so it has plenty of 
resources at its disposal. The config file is a copy of the Postgres 11 
configuration, which has no problem with multiple jobs.



Furthermore, the pg_dump command seems to have the same issue as well. 
The following command:


	/Library/PostgreSQL/14/bin/pg_dump -h localhost -p 48100 -U postgres -w 
ServicePGR_UTF8 -j 24 -Fd -f /Volumes/Migration/dbname --verbose


will stop prematurely with the following error:

	pg_dump: error: could not write to the communication channel: Broken 
pipe



Does this sound familiar to anyone? Is it an issue with the new Postgres 
14 release, or is there something else that might be causing this?


Best regards,

Nick Renders




Re: PostgreSQL 14: pg_dump / pg_restore error: could not write to the communication channel: Broken pipe

2021-10-18 Thread Nick Renders

Thank you for all the feedback and suggestions.

It seems that the "-h localhost" parameter is triggering the issue. If I 
leave it out, pg_restore works without problems with multiple jobs. I 
have also tried specifying the IP number instead of "localhost", but 
that results in the same error.


I see now that our original pg_restore script does not include the -h 
parameter. Somehow, it has snuck into my commands when testing Postgres 
14. That might mean that the same issue exists in previous versions as 
well. I will investigate further.


Nick


On 15 Oct 2021, at 19:08, Tom Lane wrote:


Alvaro Herrera  writes:
> Yeah, pg_dump in parallel mode uses a pipe to communicate between leader
> and workers; the error you see is what happens when a write to the pipe
> fails.  It sounds to me like something in the operating system is
> preventing the pipes from working properly.

BTW, I think a more likely explanation is "one of the pg_dump or
pg_restore worker processes crashed".  Why that should be is still
a mystery though.

			regards, tom lane





Re: Advice on using materialized views

2021-12-06 Thread Nick Cleaton
On Mon, 6 Dec 2021 at 18:48, Phil Endecott
 wrote:
>
> - I have a raw data table, indexed on a timestamp column, to which
>   new data is regularly being added.
>
> - I have a set of views that produce hourly/daily/monthly summaries
>   of this data. Querying these views is slow, so I'd like to make
>   them materialized views.
>
> - But I'd like the most recent data to be included in the results
>   of my queries. So I think I need a combined view that is the
>   union of the materialized view and a non-materialised view for
>   the recent values.

Assuming your table is insert-only:

How about instead of using a materialized view at all, you define a
table of hourly summaries which your script updates, and define a view
which merges that with an on-the-fly summary of main table rows newer
than the most recent summarised hour.
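
A rough sketch of what I mean, with made-up table and column names:

-- summary table maintained by the script
CREATE TABLE hourly_summary (
    hour   timestamptz PRIMARY KEY,
    n      bigint,
    total  numeric
);

-- view that appends an on-the-fly summary of the not-yet-summarised rows
CREATE VIEW hourly_combined AS
SELECT hour, n, total
FROM hourly_summary
UNION ALL
SELECT date_trunc('hour', ts) AS hour, count(*) AS n, sum(value) AS total
FROM raw_data
WHERE ts >= (SELECT coalesce(max(hour) + interval '1 hour',
                             '-infinity') FROM hourly_summary)
GROUP BY 1;

The script then only has to insert finished hours into hourly_summary.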




PostgreSQL needs percentage function

2017-12-18 Thread Nick Dro
Hi,
Why doesn't PostgreSQL have a built-in function to calculate percentages? Something like percent(number, pct),
for example:
select percent(100,1) will calculate 1% of 100 = 1
select percent(25,20) will calculate 20% of 25 = 5
 
Seems like a nice addition to the math functions list:
https://www.postgresql.org/docs/9.5/static/functions-math.html
 
This is a very helpful function; many people use percentage calculations in their work and it would simplify the process. Percentage calculation is considered a basic math operation and I think PostgreSQL should support it as a built-in function.
Seems easy to implement, doesn't it?
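
For illustration, roughly the kind of thing I mean - a trivial user-level
sketch using numeric arguments:

CREATE FUNCTION percent(base numeric, pct numeric)
RETURNS numeric LANGUAGE sql IMMUTABLE
AS $$ SELECT base * pct / 100 $$;

select percent(100, 1);  -- 1
select percent(25, 20);  -- 5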



RE: Re: PostgreSQL needs percentage function

2017-12-18 Thread Nick Dro

Hi,
I know how to implement this. It's not the issue.
It's very easy to implement absolute value as well, yet PostgreSQL still provides abs(x) as a built-in function.
My claim is that if there is a built-in function for absolute value, why not for percentage? Both are very basic mathematical operations.
 
Can you give a good reason why absolute value has a built-in function while percentage does not?

On Dec 18, 2017 17:44, hubert depesz lubaczewski wrote:
> On Mon, Dec 18, 2017 at 02:23:38PM +0200, Nick Dro wrote:
> > Hi,
> > Why doesn't PostgreSQL have a built-in function to calculate percentages?
> > Something like percent(number, pct), for example:
> > select percent(100,1) will calculate 1% of 100 = 1
> > select percent(25,20) will calculate 20% of 25 = 5
> What is the problem with using normal multiplication for this?
> depesz



RE: Re: PostgreSQL needs percentage function

2017-12-19 Thread Nick Dro


This is exactly why I think there should be some built-in function for that...
Percentage calculation exists in almost any database and information system - it requires users to implement their own functions for something that is very basic.
The general idea of my request is that PostgreSQL should have a built-in function for percentage calculation; it doesn't have to be the exact example I gave. Anything would be better than nothing.
 
 


On Dec 18, 2017 18:28, Michael Nolan wrote:



On Mon, Dec 18, 2017 at 6:23 AM, Nick Dro <postgre...@walla.co.il > wrote:


Hi,
Why doesn't PostgreSQL have a built-in function to calculate percentages? Something like percent(number, pct),
for example:
select percent(100,1) will calculate 1% of 100 = 1
select percent(25,20) will calculate 20% of 25 = 5
 
Seems like a nice addition to the math functions list:
https://www.postgresql.org/docs/9.5/static/functions-math.html
 
This is a very helpful function; many people use percentage calculations in their work and it would simplify the process. Percentage calculation is considered a basic math operation and I think PostgreSQL should support it as a built-in function.
Seems easy to implement, doesn't it?




 
It's a bit trickier than that, because you'll have to deal with integers, real, double precision, etc.  You may also want to deal with null values.  I found it more useful to write a function that displays X as a percentage of Y, rounded to 1 digit to the right of the decimal point.
--
Mike Nolan







RE: Re: Re: PostgreSQL needs percentage function

2017-12-20 Thread Nick Dro


I said: "Percentage calculation exists in almost any database and information system."
I didn't say it exists in every RDBMS. I meant that any system that uses a database, like an information system, uses percentage calculation; therefore, if most of the users of the database need such a function, it makes sense to have it in its base code - ready for use rather than asking each one to implement his own.
 
 


On Dec 19, 2017 17:51, Melvin Davidson wrote:



On Tue, Dec 19, 2017 at 4:54 AM, Pavel Stehule <pavel.steh...@gmail.com > wrote:



2017-12-19 10:13 GMT+01:00 Nick Dro <postgre...@walla.co.il >:




This is exactly why I think there should be some built-in function for that...
Percentage calculation exists in almost any database and information system - it requires users to implement their own functions for something that is very basic.
The general idea of my request is that PostgreSQL should have a built-in function for percentage calculation; it doesn't have to be the exact example I gave. Anything would be better than nothing.




 
I don't agree - there are no compatibility or portability reasons. The calculation using numeric operators is pretty simple, and the possibility to create your own function is there too.
 
So there is no reason to grow the postgresql catalogue.
 
Please don't top-post.
 
Regards
 
Pavel
 




 
 


On Dec 18, 2017 18:28, Michael Nolan wrote:



On Mon, Dec 18, 2017 at 6:23 AM, Nick Dro <postgre...@walla.co.il> wrote:


Hi,
Why doesn't PostgreSQL have a built-in function to calculate percentages? Something like percent(number, pct),
for example:
select percent(100,1) will calculate 1% of 100 = 1
select percent(25,20) will calculate 20% of 25 = 5
 
Seems like a nice addition to the math functions list:
https://www.postgresql.org/docs/9.5/static/functions-math.html
 
This is a very helpful function; many people use percentage calculations in their work and it would simplify the process. Percentage calculation is considered a basic math operation and I think PostgreSQL should support it as a built-in function.
Seems easy to implement, doesn't it?




 
It's a bit trickier than that, because you'll have to deal with integers, real, double precision, etc.  You may also want to deal with null values.  I found it more useful to write a function that displays X as a percentage of Y, rounded to 1 digit to the right of the decimal point.
--
Mike Nolan











>Percentage calculation exists in almost any database and information system
 
That is not exactly true. AFAIK, only Oracle has a Percentage function. SQL Server and MySQL do not. 
It has already been shown that it is just as easy to code the percentage inline (e.g. SELECT (50.8 * 5.2) / 100 AS pct;   -- 2.6416)
as it is to call a function: SELECT pct(50.8, 5.2);
Please do not make false statements to justify a request for a non-needed enhancement.
 
 
 
-- 

Melvin Davidson
I reserve the right to fantasize.  Whether or not you wish to share my fantasy is entirely up to you.









Re: Question about buffers_alloc in pg_stat_bgwriter view for monitoring

2018-03-29 Thread Gunnar "Nick" Bluth
Am 28.03.2018 um 23:38 schrieb Alvar Freude:
> Hi all,
> 
> Can someone tell me, what the value of buffers_alloc in the pg_stat_bgwriter 
> view 
> (https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-BGWRITER-VIEW)
>  is exactly? Is this the amount of shared buffers used by the bgwriter?

As I had to research that anyway, there's no reason not to write this
down here as well... (probably simplified, but I'm quite confident the
information is correct ;-):

Whenever a buffer is allocated, a global counter is incremented (see
"StrategyGetBuffer" in storage/buffer/freelist.c. That counter is used
by the BGWriter to determine its own wakeup/hibernate strategy, and
on-the-fly written to the global stats.

Thus, buffers_alloc is the global count of buffers allocated in the
cluster. That it appears in the bgwriter statistics is more or less
coincidental.
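
For monitoring, what you usually want is the delta between two samples of
that counter, e.g. (a minimal sketch):

-- sample periodically and subtract the previous reading
SELECT buffers_alloc, now() AS sampled_at
FROM pg_stat_bgwriter;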

Best regards,
-- 
Gunnar "Nick" Bluth
RHCE/SCLA

Mobil +49 172 8853339
Email: gunnar.bl...@pro-open.de
_
In 1984 mainstream users were choosing VMS over UNIX.
Ten years later they are choosing Windows over UNIX.
What part of that message aren't you getting? - Tom Payne






SELECT of pseudo hex value gives unexpected result

2019-01-29 Thread Gunnar "Nick" Bluth
Hi,

I found this in an SQL-injection attempt today:
union select 0x5e73266725,0x5e73266725[,...],0x5e73266725;

Tried
SELECT 0x5e73266725;

and received:
-[ RECORD 1 ]--
x5e73266725 | 0

That was not what I expected... is this expected/documented behaviour?

Thx in advance!
-- 
Gunnar "Nick" Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne





Re: SELECT of pseudo hex value gives unexpected result

2019-01-29 Thread Gunnar "Nick" Bluth
Am 29.01.2019 um 17:39 schrieb Tom Lane:
> "Gunnar \"Nick\" Bluth"  writes:
>> Tried
>> SELECT 0x5e73266725;

[...]

> SELECT 0 AS x5e73266725;
> 
> and that's the result you got.

Well, yeah, _that_ was pretty obvious. I just didn't expect it to happen...

> I think that the SQL standard considers adjacent tokens to be invalid
> unless one of them is punctuation (e.g. 1+2), but our lexer is a bit
> less rigid about that.

it kind of comforts me that it's at least not defined like that in the
standard ;-)

Cheers anyway!
-- 
Gunnar "Nick" Bluth
RHCE/SCLA

Mobil   +49 172 8853339
Email: gunnar.bl...@pro-open.de
__
In 1984 mainstream users were choosing VMS over UNIX.  Ten years later
they are choosing Windows over UNIX.  What part of that message aren't you
getting? - Tom Payne


