Re: Enforce primary key on every table during dev?

2018-02-28 Thread Achilleas Mantzios

On 28/02/2018 15:34, Jeremy Finzel wrote:
We want to enforce a policy, partly just to protect those who might forget, for every table in a particular schema to have a primary key.  This can't be done with event triggers as far as I can see, 
because it is quite legitimate to do:


BEGIN;
CREATE TABLE foo (id int);
ALTER TABLE foo ADD PRIMARY KEY (id);
COMMIT;

It would be nice to have some kind of "deferrable event trigger" or some way to 
enforce that no transaction commits which added a table without a primary key.

Any ideas?

A cron job to check for tables without a PK? Although for a short period the
offending table would be live without a PK.
But IMO every table, in addition to its PK, should also have natural unique keys as
much as possible.
Better safe than sorry.
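For the cron job, a catalog query along these lines would do (a sketch; substitute your schema name):

SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'your_schema'
  AND NOT EXISTS (SELECT 1 FROM pg_constraint con
                  WHERE con.conrelid = c.oid AND con.contype = 'p');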


Thanks,
Jeremy



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: normalizing a float array?

2018-03-06 Thread Achilleas Mantzios

On 06/03/2018 18:12, Celia McInnis wrote:

Hi - I can write a stored procedure to normalize float arrays, but is there a 
nicer way of doing this in postgresql?


Last time I checked (1-2 years ago), no.
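A plain SQL function is short enough, though. A minimal sketch, assuming L2 (Euclidean) normalization -- the name normalize_array is made up, and a zero vector will raise a division-by-zero error:

CREATE FUNCTION normalize_array(arr float8[]) RETURNS float8[]
LANGUAGE sql IMMUTABLE AS $$
    SELECT array_agg(t.v / n.norm ORDER BY t.i)
    FROM unnest(arr) WITH ORDINALITY AS t(v, i),
         (SELECT sqrt(sum(x * x)) AS norm FROM unnest(arr) AS x) AS n
$$;

-- usage: SELECT normalize_array(ARRAY[3.0, 4.0]);  yields {0.6,0.8}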

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Prefixing schema name

2018-03-07 Thread Achilleas Mantzios

On 08/03/2018 01:13, David G. Johnston wrote:

On Wed, Mar 7, 2018 at 4:05 PM, Tiffany Thang <tiffanyth...@gmail.com> wrote:

The search_path configuration works only for queries.


Um

​https://www.postgresql.org/docs/10/static/sql-createschema.html

"​A CREATE command specifying an unqualified object name creates the object in the 
current schema (the one at the front of the search path, which can be determined with the 
function current_schema)"



In fact, pg_dump relies heavily on search_path for all CREATE statements.



For example:
Is there a way to run the create/insert statements below without prefixing 
the schema name, user1?


As the user, user1:
Create table user1.table1 (id int);
Insert into user1.table1 values (1);


Just omitting "user1" and seeing what happens would be informative.  You should find it
does exactly what you think - namely because the default search_path will cause "user1"
to appear first.

Insert is more similar to Select than it is to Create - the object being
inserted into must already exist.
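For instance (a sketch; logged in as user1, with the default search_path of "$user", public):

SET search_path TO "$user", public;   -- the default, shown for clarity
CREATE TABLE table1 (id int);         -- created in schema user1
INSERT INTO table1 VALUES (1);        -- likewise resolved via the search_path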

David J.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: primary key and unique index

2018-03-23 Thread Achilleas Mantzios

On 23/03/2018 09:55, Thomas Poty wrote:

Hi all,

I am migrating from MySQL to PostgreSQL 9.6.

In MySQL a  "show create table" gives me :
 ...
  PRIMARY KEY (`ID`,`CountryCode`,`LanguageCode`),
  UNIQUE KEY `unique_my table_4` (`ID`,`CountryCode`,`LanguageCode`),
...

So, in PostgreSQL, does it make sense to create a primary key AND a unique
index based on the same columns?
Is PostgreSQL smart enough to use the unique index created for the primary key?



This is redundant. However, IMO it is always beneficial to have a bigint PK,
set implicitly via a sequence.
So you could have something like (note that MySQL's "UNIQUE KEY" clause becomes
a UNIQUE constraint in PostgreSQL):

pkid BIGSERIAL PRIMARY KEY,
...
CONSTRAINT "unique_my table_4" UNIQUE ("ID","CountryCode","LanguageCode"),
...
This way, you get the artificial bigint PK (pkid), and the explicit natural
unique key which enforces your business integrity.


I know PostgreSQL can build a primary key on top of an existing unique index, but I
also know it is possible to create several indexes on the same columns in the
same order.

Thanks

Thomas



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Question about buffers_alloc in pg_stat_bgwriter view for monitoring

2018-03-30 Thread Achilleas Mantzios

On 29/03/2018 20:24, Alvar Freude wrote:

Hi,


Am 29.03.2018 um 10:30 schrieb Gunnar Nick Bluth :

Thus, buffer_alloc is the global count of buffers allocated in the
cluster. That it appears in the bgwriter statistics is more or less
coincidental.

But it seems not to be the total shared_buffers used, but the total number of
allocated and re-allocated buffers. So it increments every time a buffer is
allocated. Maybe I'm the only one who misunderstands it – or someone with
better English than me should update the docs. ;-)

But shared_buffers represents the maximum number of PostgreSQL buffers, per server, at any given time. It is a limit (a constant), not a metric. The count of all concurrent buffers at any given time must be less
than shared_buffers.
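To watch the counter in question (a quick sanity check against the stock view):

SELECT buffers_alloc, buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;

buffers_alloc only ever grows (until a stats reset), which is consistent with it
counting allocations rather than current residency.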

So my question is : Does buffer_alloc represent the total read/write traffic of 
the database since the last reset?


Ciao
  Alvar

--
Alvar C.H. Freude | http://alvar.a-blast.org
https://blog.alvar-freude.de/
https://www.wen-waehlen.de/



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Autovacuum Problem

2018-04-02 Thread Achilleas Mantzios

On 03/04/2018 09:36, Kein Name wrote:

However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


VACUUM <> VACUUM FULL
Normally, running VACUUM via autovacuum helps reuse free space, but it does not
actually return it to the filesystem / OS (unless the free space happens to be
the last blocks in the data file(s)).
People in normal/average types of installations/workloads no longer (since 8.2)
run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where e.g.
the need for free space is urgent.
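To see which tables accumulate dead rows, and whether autovacuum is visiting them:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;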


Any help is appreciated!

Thanks
Stefan



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Autovacuum Problem

2018-04-03 Thread Achilleas Mantzios

On 03/04/2018 10:00, Kein Name wrote:

> VACUUM <> VACUUM FULL
> Normally, running VACUUM via autovacuum helps reuse free space, but it does not
actually return it to the filesystem / OS (unless the free space happens to be the last blocks
in the data file(s)).
> People in normal/average types of installations/workloads no longer (since 8.2)
run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where e.g.
the need for free space is urgent.

Hmm, then when is the space released to the system if a manual VACUUM FULL run is
not done? Any way to "force" this?

Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?


2018-04-03 8:49 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 09:36, Kein Name wrote:

However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


VACUUM <> VACUUM FULL
Normally, running VACUUM via autovacuum helps reuse free space, but it does not
actually return it to the filesystem / OS (unless the free space happens to be the last
blocks in the data file(s)).
People in normal/average types of installations/workloads no longer (since 8.2)
run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where e.g.
the need for free space is urgent.


Any help is appreciated!

    Thanks
Stefan


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Autovacuum Problem

2018-04-03 Thread Achilleas Mantzios

On 03/04/2018 10:54, Kein Name wrote:

> Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?

Drive is running full :/
Sadly I have no control and knowledge whatsoever over/about the application.

I have now tuned the autovacuum parameters for the critical tables, to have it run
more often and use greater resources while doing so, but I am unsure whether
this helps with the bloating.


You could try setting log_statement=all for a (short) period with typical
"heavy-ish" write activity, to learn what the app is doing.
IMHO you must prepare for larger storage. After that you should monitor disk
usage periodically to see if usage stabilizes, or at least if the rate of
increase drops.
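Per-table autovacuum tuning is done with storage parameters, e.g. (a sketch; big_table stands for one of your critical tables):

ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- trigger vacuum after ~1% dead rows
    autovacuum_vacuum_cost_delay   = 0      -- let the worker run at full speed
);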



2018-04-03 9:39 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 10:00, Kein Name wrote:

> VACUUM <> VACUUM FULL
> Normally, running VACUUM via autovacuum helps reuse free space, but it does
not actually return it to the filesystem / OS (unless the free space happens to be the last
blocks in the data file(s)).
> People in normal/average types of installations/workloads no longer (since
8.2) run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where
e.g. the need for free space is urgent.

Hmm, then when is the space released to the system if a manual VACUUM FULL run
is not done? Any way to "force" this?

Why would you want that? Do you have any control over the application? Any 
"special" patterns used in the app?


2018-04-03 8:49 GMT+02:00 Achilleas Mantzios <ach...@matrix.gatewaynet.com>:

On 03/04/2018 09:36, Kein Name wrote:

However no space seems to be freed to the system.

Is there any way a bloody newbie can debug this behaviour?


VACUUM <> VACUUM FULL
Normally, running VACUUM via autovacuum helps reuse free space, but it does
not actually return it to the filesystem / OS (unless the free space happens to be the last
blocks in the data file(s)).
People in normal/average types of installations/workloads no longer (since
8.2) run VACUUM (or VACUUM FULL) by hand, except in extraordinary cases where
e.g. the need for free space is urgent.


Any help is appreciated!

    Thanks
Stefan


-- 
Achilleas Mantzios

    IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





-- 
Achilleas Mantzios

    IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Postgresql Split Brain: Which one is latest

2018-04-10 Thread Achilleas Mantzios

On 10/04/2018 16:50, Vikas Sharma wrote:

Hi,

We have postgresql 9.5 with streaming replication (master-slave) and automatic failover. Due to a network glitch we have been in a master-master situation for quite some time. Please, could you advise the best way
to confirm which node is the latest in terms of updates to the postgres databases.

The one with the latest timeline.
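You can read each node's timeline from its control file (adjust the data directory path to yours):

pg_controldata /var/lib/postgresql/9.5/main | grep -i timeline

Compare the "Latest checkpoint's TimeLineID" values: the node promoted last has the higher one. Note that the timeline alone tells you which node was promoted most recently, not that it holds every update, so you may still need to compare the data itself.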


Regards
Vikas Sharma



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Login with LDAP authentication takes 5 seconds

2018-05-31 Thread Achilleas Mantzios

On 28/05/2018 17:26, Andreas Schmid wrote:

Hi,

I configured my PostgreSQL 10 DB on Debian 9.2 with LDAP authentication (simple bind mode). While this basically works, it has the strange effect that the first login with psql takes around 5 
seconds. When I reconnect within 60 seconds, the login completes immediately.


The LDAP server is behind a firewall. So for a test, in pg_hba.conf I put the LDAP server's IP address instead of its DNS name (for the parameter ldapserver). Like that, all logins complete immediately.
But in general I prefer specifying the DNS name rather than the IP.


When I checked on the DB machine with the following commands
host my.ldap.server.org
dig my.ldap.server.org
both always returned the host name and IP address of the LDAP server
immediately.

Does anyone of you have an explanation for this, or a hint, where I could do 
some further investigation?

IPv4 vs IPv6? Any strange timeouts? Look in the postgresql logs for any
messages.
Also, definitely run wireshark; it'll tell you a lot about what's happening
between postgresql and your LDAP server.
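On a headless box, tcpdump works just as well (substitute your LDAP server's address, and port 636 if you use ldaps):

tcpdump -n -i any host my.ldap.server.org and port 389

If the 5 seconds pass before the first LDAP packet even leaves the machine, the delay is in name resolution rather than in the LDAP exchange itself.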


Thanks,
Andy



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Microsoft buys GitHub, is this a threat to open-source

2018-06-07 Thread Achilleas Mantzios

On 07/06/2018 13:56, Adam Tauno Williams wrote:

On Thu, 2018-06-07 at 11:54 +0100, Paul Linehan wrote:

Only to the extent that GitHub was a threat before this. They never
embraced Open Source!
They cynically portrayed themselves as the developer's best pal when,
in fact, they are/were nothing more and nothing less than another
commercial software company!
I was planning to start using GitLab before this announcement, but
the change will not affect my decision one bit - it had been made
anyway!

Sourceforge!  Their entire platform is Open Source, they support git,
and they integrate with a variety of common open source packages.

Who hasn't missed sourceforge? Or... freshmeat, while we're at it :)


https://sourceforge.net/



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Monitor repl slot size

2018-07-13 Thread Achilleas Mantzios

On 13/07/2018 13:19, Nicola Contu wrote:

Hello,
we used to monitor the replication slot size on postgres 9.6.6 with the 
following query:

SELECT pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn) FROM 
pg_replication_slots WHERE slot_name = 'SLOT NAME';

We are moving to postgres 10.4 and we saw the pg_xlog_location_diff is not 
there anymore.


The functions were renamed in 10: location -> lsn, xlog -> wal. So:
SELECT pg_wal_lsn_diff(pg_current_wal_insert_lsn(), restart_lsn) FROM
pg_replication_slots WHERE slot_name = 'SLOT NAME';

I know we can re-create it following this link : 
https://github.com/DataDog/integrations-core/issues/907

but, is there any better way to do it? Any replacement for that function on 
postgres 10?

Thanks a lot,
Nicola



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Can't compile postgresql 11 on FreeBSD 11.1

2018-07-22 Thread Achilleas Mantzios

On 17/07/2018 23:25, Tom Lane wrote:

Adrian Klaver  writes:

On 07/17/2018 12:34 PM, Márcio Antônio Sepp wrote:

I'm trying to compile PostgreSQL 11beta2 but these errors occur:
checking readline.h usability... no

Looks like you need whatever is the FreeBSD equivalent of readline-dev(el).

AFAICT FreeBSD doesn't do things that way.  On a nearby machine, I see

$ pkg which /usr/local/include/readline/readline.h
/usr/local/include/readline/readline.h was installed by package readline-6.3.8

What's more likely the problem is that FreeBSD insists on installing
packages under /usr/local, but it does *not* set that up to be part of
gcc's default search paths.  (Security 1, usability 0.)  You need these
configure flags to do much of anything on that platform:

   --with-includes=/usr/local/include --with-libs=/usr/local/lib

For ages I have been manually compiling PostgreSQL under FreeBSD with only:
./configure --with-ldap

But in my case I tend to fall back to gcc instead of the native clang/llvm.
That might be his issue.



regards, tom lane



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Can't compile postgresql 11 on FreeBSD 11.1

2018-07-22 Thread Achilleas Mantzios

On 23/07/2018 09:22, Achilleas Mantzios wrote:

On 17/07/2018 23:25, Tom Lane wrote:

Adrian Klaver  writes:

On 07/17/2018 12:34 PM, Márcio Antônio Sepp wrote:

I'm trying to compile PostgreSQL 11beta2 but these errors occur:
checking readline.h usability... no

Looks like you need whatever is the FreeBSD equivalent of readline-dev(el).

AFAICT FreeBSD doesn't do things that way.  On a nearby machine, I see

$ pkg which /usr/local/include/readline/readline.h
/usr/local/include/readline/readline.h was installed by package readline-6.3.8

What's more likely the problem is that FreeBSD insists on installing
packages under /usr/local, but it does *not* set that up to be part of
gcc's default search paths.  (Security 1, usability 0.)  You need these
configure flags to do much of anything on that platform:

   --with-includes=/usr/local/include --with-libs=/usr/local/lib

For ages I have been manually compiling PostgreSQL under FreeBSD with only:
./configure --with-ldap

But in my case I tend to fall back to gcc instead of the native clang/llvm.
That might be his issue.

It seems to be the case. With GCC installed, by default I get no problems:
root@smadev:/usr/local/src/postgresql-11beta2# uname -a
FreeBSD smadev.internal.net 11.1-RELEASE-p4 FreeBSD 11.1-RELEASE-p4 #0: Tue Nov 
14 06:12:40 UTC 2017 
r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC amd64
root@smadev:/usr/local/src/postgresql-11beta2#
root@smadev:/usr/local/src/postgresql-11beta2# cd 
/usr/local/src/postgresql-11beta2
root@smadev:/usr/local/src/postgresql-11beta2# ./configure
.
config.status: linking src/makefiles/Makefile.freebsd to src/Makefile.port
root@smadev:/usr/local/src/postgresql-11beta2# echo $?
0

By looking into config.log :

configure:3885: checking for gcc
configure:3901: found /usr/local/bin/gcc
configure:3912: result: gcc
configure:3943: checking for C compiler version
configure:3952: gcc --version >&5
gcc (FreeBSD Ports Collection) 6.4.0

Now, if I configure for clang I get the error:
root@smadev:/usr/local/src/postgresql-11beta2# setenv CC cc
root@smadev:/usr/local/src/postgresql-11beta2# ./configure
..
checking for readline/readline.h... no
configure: error: readline header not found
If you have libedit already installed, see config.log for details on the
failure.  It is possible the compiler isn't looking in the proper directory.
Use --without-readline to disable libedit support.
root@smadev:/usr/local/src/postgresql-11beta2#

As suggested, when using clang, the way to overcome is via :
./configure --with-includes=/usr/local/include --with-libs=/usr/local/lib





    regards, tom lane





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-07-30 Thread Achilleas Mantzios

This is with PostgreSQL 10.4.
How to reproduce :
postgres@smadev:~% psql
psql (10.4)
Type "help" for help.

Alter the role for search path :
dynacom=# ALTER ROLE amura3 SET search_path TO "$user", amuragents, public;
ALTER ROLE
dynacom=#

Verify :
postgres@smadev:~% psql -U amura3
Password for user amura3:
psql (10.4)
Type "help" for help.

dynacom=> show search_path ;
         search_path
----------------------------
 $user, amuragents, public
(1 row)

dynacom=>

pg_dumpall's output :
ALTER ROLE amura3 SET search_path TO $user, amuragents, public;

psql -f pg_dumpall_out.sql :
dynacom=# ALTER ROLE amura3 SET search_path TO $user, amuragents, public;
ERROR:  syntax error at or near "$"

Is this a bug or am I missing something ?

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-07-30 Thread Achilleas Mantzios

On 30/07/2018 16:23, Adrian Klaver wrote:

On 07/30/2018 05:57 AM, Achilleas Mantzios wrote:

This is with PostgreSQL 10.4.
How to reproduce :
postgres@smadev:~% psql
psql (10.4)
Type "help" for help.

Alter the role for search path :
dynacom=# ALTER ROLE amura3 SET search_path TO "$user", amuragents, public;
ALTER ROLE
dynacom=#

Verify :
postgres@smadev:~% psql -U amura3
Password for user amura3:
psql (10.4)
Type "help" for help.

dynacom=> show search_path ;
 search_path
---
  $user, amuragents, public
(1 row)

dynacom=>

pg_dumpall's output :
ALTER ROLE amura3 SET search_path TO $user, amuragents, public;

psql -f pg_dumpall_out.sql :
dynacom=# ALTER ROLE amura3 SET search_path TO $user, amuragents, public;
ERROR:  syntax error at or near "$"


Well, the above does not have $user double quoted. Was that hand-entered, or was
it like that in the pg_dumpall_out.sql file?



Is this a bug or am I missing something ?


I could not replicate:


You are absolutely right, I apologize for the noise :(
I had forgotten that we had to tweak src/backend/utils/misc/guc.c in order for 
search_path to work with pgbouncer in transaction mode.

--- /usr/local/src/postgresql-10.4/src/backend/utils/misc/guc.c.orig 2018-05-07 
23:51:40.0 +0300
+++ /usr/local/src/postgresql-10.4/src/backend/utils/misc/guc.c 2018-06-14 
16:07:29.832476000 +0300
@@ -3266,7 +3266,7 @@
    {"search_path", PGC_USERSET, CLIENT_CONN_STATEMENT,
    gettext_noop("Sets the schema search order for names that 
are not schema-qualified."),
    NULL,
-   GUC_LIST_INPUT | GUC_LIST_QUOTE
+   GUC_LIST_INPUT | GUC_REPORT
    },
    &namespace_search_path,
    "\"$user\", public",

With GUC_LIST_QUOTE it kept re-quoting and it broke the app. Without GUC_REPORT 
it was losing the search_path.



create role sp_test;

alter role sp_test SET search_path TO "$user", public;

pg_dumpall -g -U postgres > sp_test.sql

In sp_test.sql

CREATE ROLE sp_test;

ALTER ROLE sp_test SET search_path TO "$user", public;

drop role sp_test ;

psql -d test -U postgres  -f sp_test.sql


\du

sp_test    | Cannot login    | {}


\drds
              List of settings
  Role   | Database |          Settings
---------+----------+-----------------------------
 sp_test |          | search_path="$user", public









--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-07-30 Thread Achilleas Mantzios

On 30/07/2018 16:51, Tom Lane wrote:

Achilleas Mantzios  writes:

This is with PostgreSQL 10.4.
How to reproduce :
dynacom=# ALTER ROLE amura3 SET search_path TO "$user", amuragents, public;
ALTER ROLE
...
dynacom=> show search_path ;
         search_path
----------------------------
   $user, amuragents, public
(1 row)

Hm, that's not happening for me:

You (and Adrian) are right. This is due to our own tweaking (which I had 
forgotten). Sorry for the false alarm.


regression=# alter user joe set search_path to "$user", public;
...
regression=> show search_path ;
   search_path
------------------
  "$user", public
(1 row)

and then pg_dumpall produces

ALTER ROLE joe SET search_path TO "$user", public;

There was a relevant bug fix in March (commit 742869946) ...
are you certain this is a 10.4 server, and not 10.3 or older?

    regards, tom lane



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: ALTER ROLE SET search_path produced by pg_dumpall gives : ERROR: syntax error at or near "$" .

2018-08-01 Thread Achilleas Mantzios

On 30/07/2018 17:00, Tom Lane wrote:

Achilleas Mantzios  writes:

On 30/07/2018 16:51, Tom Lane wrote:

Hm, that's not happening for me:

You (and Adrian) are right. This is due to our own tweaking (which I had 
forgotten). Sorry for the false alarm.

It looks like your hack was to work around the bug that was fixed
properly in 742869946.  You should be able to undo that now ...

Hello Tom,
On pgsql 10.4 , I reverted (undid) this patch/hack :

root@smadev:/usr/local/src/postgresql-10.4# diff -u 
./src/backend/utils/misc/guc.c ./src/backend/utils/misc/guc.c.hack
--- ./src/backend/utils/misc/guc.c  2018-08-01 16:22:30.901629000 +0300
+++ ./src/backend/utils/misc/guc.c.hack 2018-08-01 15:45:15.893348000 +0300
@@ -3266,7 +3266,7 @@
    {"search_path", PGC_USERSET, CLIENT_CONN_STATEMENT,
    gettext_noop("Sets the schema search order for names that 
are not schema-qualified."),
    NULL,
-   GUC_LIST_INPUT | GUC_LIST_QUOTE
+   GUC_LIST_INPUT | GUC_REPORT
    },
    &namespace_search_path,
    "\"$user\", public",

And then (i.e. with stock 10.4) I tested with pgbouncer again in transaction mode, and still (i.e. without GUC_REPORT) the pgbouncer client does not get the search_path from the server and does not store it after
the first server connection; therefore, after the server disconnects, the second time the client connects it does not set the search_path.

I tested with:
GUC_LIST_INPUT | GUC_LIST_QUOTE | GUC_REPORT
as well, but then I get the re-quoting problem I described above:
on the first connection the search path is set correctly:
bdynacom,epaybdynacom,epay,"$user", public
but after a server disconnection and re-connection the search path becomes:
"bdynacom,epaybdynacom,epay,""$user"", public"
which is wrong.

So in order to retain all the benefits of transaction mode in pgbouncer I had 
to re-apply the first patch/hack, at the expense of the nuisance at pg_dumpall 
and search paths.




regards, tom lane



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Postgres - search for value throughout many tables?

2018-08-08 Thread Achilleas Mantzios

If the number of tables is smallish you could run something like:

select * from (
select table1::text as thecol from table1
UNION select table2::text FROM table2
UNION select table3::text FROM table3
UNION ...
) as qry WHERE thecol ~* 'some pattern';

On 08/08/2018 17:09, czezz wrote:

Hi everyone,
I want to aks if anyone knows is there a way to search for specific "value" 
throughout list of tables OR all tables in databse?

Cheers,
czezz



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Audit management

2018-08-10 Thread Achilleas Mantzios

On 10/08/2018 17:01, dangal wrote:

Dear, I would like to ask what you recommend.
I manage a production database with a nominated user with the same
permissions as the postgres user: SUPERUSER INHERIT CREATEDB CREATEROLE
REPLICATION.
The client is asking us for an audit of this nominated user. Could you
recommend any extension or approach that you believe is the best solution?
I am currently testing pgaudit.

Pgaudit. That's what we use. It seems to be a really nice tool.
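A minimal setup looks roughly like this (a sketch; the log classes below are just an example, and 'auditor' is a made-up role name):

# postgresql.conf
shared_preload_libraries = 'pgaudit'   # restart required
pgaudit.log = 'ddl, role, write'       # statement classes for session auditing
pgaudit.role = 'auditor'               # optional: enables object-level auditing

For object-level auditing you then grant the auditor role exactly the privileges (on exactly the tables) you want audited.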




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Audit management

2018-08-19 Thread Achilleas Mantzios

On 10/08/2018 21:20, dangal wrote:

Achilleas, thanks for your answer.
One question: is the only possible output the postgres log?
I would like to be able to send the audit output to a different place, so
that I cannot access the file myself.

Yes, only the postgresql log, and you have got to take care of ^M (CR), CRLF in the
data.




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Upgrade/Downgrade

2018-08-23 Thread Achilleas Mantzios

On 23/08/2018 14:30, Sonam Sharma wrote:

No, I didn't. The version is not compatible with the application, so I need to
downgrade it.


Have you tried to tweak postgresql.conf?
What do you mean "not compatible"? What's the problem?



On Thu, Aug 23, 2018, 4:56 PM Andreas Kretschmer <andr...@a-kretschmer.de> wrote:

On 23 August 2018 12:51:08 CEST, Sonam Sharma <sonams1...@gmail.com> wrote:
>Hello,
>
>My postgres version is 10.4 and I want to downgrade it to 9.5.
>and one is at 9.2 and need to upgrade it to 9.5.
>Can someone please help how to do this.
>Unable to find anything online

Why downgrade? Dump and restore should do the job, have you tried it?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Upgrade/Downgrade

2018-08-23 Thread Achilleas Mantzios

On 23/08/2018 22:40, Nikolay Samokhvalov wrote:

On Thu, Aug 23, 2018 at 11:44 AM Joshua D. Drake <j...@commandprompt.com> wrote:

On 08/23/2018 04:47 AM, Achilleas Mantzios wrote:
> On 23/08/2018 14:30, Sonam Sharma wrote:
>> No, I didn't. The version is not compatible with application, so need
>> to downgrade it
>
> Have you tried to tweak postgresql.conf ?
> What do you mean is not compatible? What's the problem?

My guess would be that they have a vendor supported application that is
*only* supported on 9.5. I run into this more often than I like.


Same story.

I suppose it's related (at least partially) to the xlog->wal and **_location->**_lsn renamings of function names (full list of what changed in Postgres 10:
https://wiki.postgresql.org/wiki/New_in_postgres_10#Renaming_of_.22xlog.22_to_.22wal.22_Globally_.28and_location.2Flsn.29)

Those 10's changes are really painful.

I wish there were some grace period, say 2-3 years, during which the pg*_xlog_*
function names would also work as aliases for pg*_wal_*.

Yes, this is true for scripts and monitoring, but for an app? I doubt many apps
depend on wal/lsn related functions.


BTW, if the reason for downgrading is really related only to these renamings, it
might be less stressful just to create function aliases, like:

create function pg_catalog.pg_last_xlog_receive_location() returns pg_lsn
as $$
  select pg_last_wal_receive_lsn();
$$ language sql;


+ granting proper permissions to specified DB roles, where needed.

Of course, it's hacky and should be only as a temporary solution until the app 
is adjusted to support Postgres 10+.
But it gives an ability to have all good things added to Postgres 9.6 and 10.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: A Timeseries Case Study: InfluxDB VS PostgreSQL

2018-09-07 Thread Achilleas Mantzios

On 07/09/2018 11:07, Fabio Pardi wrote:

Hi,

I recently published a blog article reporting a small research I made on
the usage of InfluxDB and PostgreSQL for time series, together with
Grafana on my specific use case.


I think that some of you might find it interesting, maybe inspiring or
perhaps it can trigger some interesting discussion, given the high level
of expertise of people in this mailing list.

I reached out to InfluxDB guys, but after an initial ack, I did not hear
from them any longer.


https://portavita.github.io/2018-07-31-blog_influxdb_vs_postgresql


All comments, critics, suggestions and corrections are very welcome

Nice read! Wonder if you could repeat the tests on pgsql 10.5 and btree/BRIN.



Regards,

fabio pardi



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: PG8.3->10 migration data differences

2018-09-10 Thread Achilleas Mantzios

On 10/09/2018 11:22, Csaba Ragasits wrote:

Hello,

We would like to migrate our old databases to a new postgres server, with the simple backup-restore process. The process runs fine without errors, but when we checked the source and the migrated
data at field level, we found the following differences:


For example:

Field type: TIMESTAMP WITHOUT TIME ZONE NOT NULL
pg93: 2015-08-28 21:25:07.70
pg10: 2015-08-28 21:25:07.7

Those two are absolutely the same value. 7/10 = 70/100


Field type: REAL
pg93: 2.2
pg10: 2.2005


Those have to do with rounding. Precision for real is 6 decimal digits. Your
difference is in the 8th digit.
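You can see this from psql; how many digits get printed is governed by extra_float_digits, while the stored bits are identical:

SET extra_float_digits = 3;  -- print the maximum number of digits
SELECT '2.2'::real;          -- the normally hidden trailing digits appear
RESET extra_float_digits;    -- back to the default of 6 significant digits
SELECT '2.2'::real;          -- prints just 2.2

pg_dump itself raises extra_float_digits, which is why the dump file carries the full-precision values.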

When I check the 8.3 pg_dump file, it contains the pg83 correct values.

Do you have any ideas, why different this values?

Thx,
Csaba




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Replication Issues

2018-09-27 Thread Achilleas Mantzios

On 28/9/18 9:00 π.μ., bhargav kamineni wrote:

Hi Team,

I have configured replication using a slot, but it failed by throwing the ERROR: pg_basebackup: could not get transaction log end position from server: ERROR:  requested WAL segment
00012C9D0085 has already been removed. This is unexpected, because I created the slot on the master first and then issued the base backup command from the slave's end.

The command is:
/usr/lib/postgresql/9.5/bin/pg_basebackup -U user --max-rate='150 M'
--progress --verbose --write-recovery-conf --status-interval='10 s' -D data_dir
-h host_ip -p 5433


Either :
add  "-X stream" to the pg_basebackup command
or
increase wal_keep_segments on the server.
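I.e. your command with WAL streaming added (everything else unchanged):

/usr/lib/postgresql/9.5/bin/pg_basebackup -U user -X stream --max-rate='150 M'
--progress --verbose --write-recovery-conf --status-interval='10 s' -D data_dir
-h host_ip -p 5433

With -X stream a second connection streams the WAL generated during the backup, so you no longer depend on the server keeping those segments around.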


These are the settings on my master
archive_mode=on
archive_command='/bin/true'
wal_keep_segments=512
max_wal_senders=4
Series of steps i have followed :
1) Enabled password less authentication between master and slave
2)created slot on master (assuming it will store wal's regardless of other 
settings)
3)started basebackup from slave's end
4)Issued checkpoint at master's end

Here my concern is: the slave should recover WAL from the replication slot, so why did I get the above ERROR? Why did the slot remove the requested WAL file? Could you please let me know why it happened,
or did I miss something?


Thanks,
Bhargav K




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Replication Issues

2018-09-28 Thread Achilleas Mantzios

On 28/9/18 1:50 μ.μ., bhargav kamineni wrote:

We are using version 9.5; the slot option -S is not available. Can I go with the -X
stream option?

-S is intended for when you plan to use the backup as a hot standby, in which case you must
use the same replication slot.
If you plan to use the backup as a stand-alone backup you don't need this
option anyway.


On Fri 28 Sep, 2018, 12:01 PM Laurenz Albe, <laurenz.a...@cybertec.at> wrote:

 bhargav kamineni wrote:
> Hi Team,
>
> I have configured replication using slot ,But it failed by throwing the
> ERROR  pg_basebackup: could not get transaction log end position from 
server:
> ERROR:  requested WAL segment 00012C9D0085 has already been 
removed ,
> which is  unexpected because i have created the slot on master first
> and then issued the base backup command from slave's end
> the command is
> /usr/lib/postgresql/9.5/bin/pg_basebackup -U  user --max-rate='150 M' 
--progress --verbose --write-recovery-conf --status-interval='10 s' -D data_dir  
-h host_ip  -p 5433
> These  are the settings on my master
> archive_mode=on
> archive_command='/bin/true'
> wal_keep_segments=512
> max_wal_senders=4
> Series of steps i have followed :
> 1) Enabled password less authentication between master and slave
> 2)created slot on master (assuming it will store wal's regardless of 
other settings)
> 3)started basebackup from slave's end
> 4)Issued checkpoint at master's end
>
> Here my concern is , slave should recover WAL from replication slot but 
why i
> got the above ERROR , Why slot removed  the requested wal file , Could 
you please
> let me know the reason why it happened or did i miss something ?

I guess your base backup took long enough for the required WAL segments to 
be
removed by the time it was done.

To prevent that, create a replication slot *before* you perform 
pg_basebackup
and use the options "-S  -X stream" of pg_basebackup.

You then use the same slot in "recovery.conf".

    That way you cannot lose any WAL.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: SuperUser permission denied when creating table.

2019-07-09 Thread Achilleas Mantzios

On 9/7/19 3:31 μ.μ., Dave Hughes wrote:


Has anyone run across something like this before?


can you show your pg_dumpall -x --schema-only
?

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: SuperUser permission denied when creating table.

2019-07-09 Thread Achilleas Mantzios

On 9/7/19 4:50 μ.μ., Dave Hughes wrote:

Sure thing.  I attached it as a txt file.  Thanks for the help!

PostgreSQL version?


On Tue, Jul 9, 2019 at 9:29 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 9/7/19 3:31 μ.μ., Dave Hughes wrote:
>
> Has anyone run across something like this before?

can you show your pg_dumpall -x --schema-only
?

-- 
    Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: PostgreSQL as a Service

2019-07-18 Thread Achilleas Mantzios

On 18/7/19 5:23 μ.μ., Dirk Riehle wrote:

Hello everyone!

tl;dr: How well is PostgreSQL positioned to serve as the database of choice for 
a DBaaS operator? Specifically, how much open source is (may be) missing?



I'm un-lurking, hoping to learn more about PostgreSQL in DBaaS land.

You may have seen this announcement.

https://blog.yugabyte.com/why-we-changed-yugabyte-db-licensing-to-100-open-source/

YugaByte bills itself as a PostgreSQL compatible database (yay to at least the intent) but most importantly, it decided to single-license its database under a permissive license, including "the
enterprise features" that frequently are held back by single-vendor open source firms who want to earn an RoI for their VC investment.

The interesting part (and why I'm posting it here) is the following staging of 
functionality implied in that post.

1. Core database (permissively licensed)
2. Enterprise features (permissively licensed)
3. DBaaS features (trial license, commercial, no open source)
4. Managed by YugaByte (commercial)

Point 3. suggests that they want to make money from self-managed DBaaS, but in the post they also write they really only expect significant income from 4, i.e. YugaByte (the database) managed by 
YugaByte (the company).


Where is PostgreSQL in relation to this?

1. PostgreSQL itself is certainly 1 above, the core database.

2. PostgreSQL permissive license allows commercial offerings to build and not share enterprise features (and I'm sure some companies are holding back). However, PostgreSQL is true community open 
source so whatever enterprise features become relevant, they'll eventually be commoditized and out in the open. Is there a lot that is missing? And that some companies have but are not contributing?


3. So, PostgreSQL as-a-service. There are several companies (plenty?) who service PostgreSQL. I wonder how this is being shared back? I don't have a clear picture here, my impression is that the 
software to run these potentially large farms is proprietary? Or, that operators would argue, this is all configuration and shell scripts and not really shareable open source?


One aspect related to as-a-service is scaling out, i.e. not just having many small customers, but also serving large customers in the cloud. I looked around for scaling out solutions. There used to 
be CitusData (not any longer it seems), there is PostgresXL which seems to be moving slowly. Is that it?

There is also the Bidirectional Replication project (BDR). Also, why the comment
about CitusData? I would guess the opposite is true.


4. Managed DBaaS is not relevant here but always a commercial offering.

So, back to my main question above. If I wanted to run a DBaaS shop with only 
PostgreSQL open source, how far away from being able to compete with AWS or 
Azure (or YugaByte for that matter) would I be?

Thanks for any thoughts and opinions! Dirk




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Exclusion constraints on overlapping text arrays?

2019-08-30 Thread Achilleas Mantzios

On 30/8/19 3:42 π.μ., Ken Tanzer wrote:

Hi.  Using 9.6.14, I was setting up a table with this:

EXCLUDE using gist (grant_number_codes with &&)

Where grant_numbers is a varchar[].  I get this error:

ERROR:  data type character varying[] has no default operator class for access method 
"gist"
HINT:  You must specify an operator class for the index or define a default 
operator class for the data type.

I did some Googling, and it looks like you could do this for integer arrays with the intarray extension.  I didn't see anything I'd recognize as an equivalent for varchar (or text) arrays.  Is there 
any way to do this now?  And if not, is there much prospect of this being implemented at some point?


I found a couple of old threads about this.  Not sure if they are still 
relevant, but listed below.

Also, on a side note, I tried using grant_number_codes::text[] with &&, but 
that got a syntax error.  Does that mean casting isn't allowed at all in these 
constraints?

Maybe take a look at 
https://dba.stackexchange.com/questions/185174/error-data-type-text-has-no-default-operator-class-for-access-method-gist


Thanks in advance,
Ken

1) This 2014 thread asked about this:

_*array exclusion constraints*_
https://www.postgresql.org/message-id/flat/20141113183843.E8AC620362%40smtp.hushmail.com

and pointed toward this 2013 discussion:

_*Todo item: Support amgettuple() in GIN*_
https://www.postgresql.org/message-id/flat/5297DC17.7000608%40proxel.se



--
AGENCY Software
A Free Software data system
By and for non-profits
/http://agency-software.org//
/https://demo.agency-software.org/client/
ken.tan...@agency-software.org <mailto:ken.tan...@agency-software.org>
(253) 245-3801

Subscribe to the mailing list 
<mailto:agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Exclusion constraints on overlapping text arrays?

2019-08-30 Thread Achilleas Mantzios

On 30/8/19 10:20 π.μ., Achilleas Mantzios wrote:

On 30/8/19 3:42 π.μ., Ken Tanzer wrote:

Hi.  Using 9.6.14, I was setting up a table with this:

EXCLUDE using gist (grant_number_codes with &&)

Where grant_numbers is a varchar[].  I get this error:

ERROR:  data type character varying[] has no default operator class for access method 
"gist"
HINT:  You must specify an operator class for the index or define a default 
operator class for the data type.

I did some Googling, and it looks like you could do this for integer arrays with the intarray extension.  I didn't see anything I'd recognize as an equivalent for varchar (or text) arrays.  Is 
there any way to do this now?  And if not, is there much prospect of this being implemented at some point?


I found a couple of old threads about this.  Not sure if they are still 
relevant, but listed below.

Also, on a side note, I tried using grant_number_codes::text[] with &&, but 
that got a syntax error.  Does that mean casting isn't allowed at all in these 
constraints?

Maybe take a look at 
https://dba.stackexchange.com/questions/185174/error-data-type-text-has-no-default-operator-class-for-access-method-gist


So you download the latest version of smlar from here:
http://sigaev.ru/git/gitweb.cgi?p=smlar.git
The following commands are from FreeBSD, but you get the point:

% tar xvfz smlar-92dc9c7.tar.gz
% cd smlar-92dc9c7
% gmake (or make in linux)
if it complains about not finding /contrib/contrib-global.mk then you do
% setenv USE_PGXS 1 (export USE_PGXS=1 in linux)
and repeat the make step
% sudo make install (again, solve problems as above)

when installed successfully, then:
psql
create extension smlar;
-- and then create your exclude constraint:
alter table your_table_name ADD CONSTRAINT constrname EXCLUDE USING gist
(grant_number_codes _text_sml_ops with &&);



Thanks in advance,
Ken

1) This 2014 thread asked about this:

_*array exclusion constraints*_
https://www.postgresql.org/message-id/flat/20141113183843.E8AC620362%40smtp.hushmail.com

and pointed toward this 2013 discussion:

_*Todo item: Support amgettuple() in GIN*_
https://www.postgresql.org/message-id/flat/5297DC17.7000608%40proxel.se



--
AGENCY Software
A Free Software data system
By and for non-profits
/http://agency-software.org//
/https://demo.agency-software.org/client/
ken.tan...@agency-software.org <mailto:ken.tan...@agency-software.org>
(253) 245-3801

Subscribe to the mailing list 
<mailto:agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: pgbouncer with ldap

2019-09-09 Thread Achilleas Mantzios

On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:

Christoph Moench-Tegeder wrote:

It has hba and via hba file one can specify ldap connections

https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html

https://pgbouncer.github.io/config.html#hba-file-format
"Auth-method field: Only methods supported by PgBouncer’s auth_type
are supported", and "ldap" is not supported.
When there's no ldap support in pgbouncer, there's no ldap support
in pgbouncer.

To throw in something less tautological:

PgBouncer supports PAM authentication, so if you are on UNIX,
you could use PAM's LDAP module to do what you want.

Right, I had written a blog about it:
https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

However, I have always wished (since my first endeavors with pgbouncer) that it were less
complicated.


Yours,
Laurenz Albe



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: pgbouncer with ldap

2019-09-11 Thread Achilleas Mantzios

On 11/9/19 2:47 μ.μ., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf and 
pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with postgresql. Unfortunately I have
no experience with Aurora. We have been building from source for ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
> Christoph Moench-Tegeder wrote:
>>> It has hba and via hba file one can specify ldap connections
>>>
>>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
>> https://pgbouncer.github.io/config.html#hba-file-format
>> "Auth-method field: Only methods supported by PgBouncer’s auth_type
>> are supported", and "ldap" is not supported.
>> When there's no ldap support in pgbouncer, there's no ldap support
>> in pgbouncer.
> To throw in something less tautological:
>
> PgBouncer supports PAM authentication, so if you are on UNIX,
> you could use PAM's LDAP module to do what you want.
Right, I had written a blog about it :

https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

However, I always wished (since my first endeavors with pgbouncer) it was 
less complicated.
>
> Yours,
> Laurenz Albe


-- 
Achilleas Mantzios

IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: pgbouncer with ldap

2019-09-13 Thread Achilleas Mantzios

On 13/9/19 10:19 π.μ., Ayub M wrote:

Stumbled in the first step - PAM authentication via pgbouncer. After compiling 
pgbouncer with the pam plug-in, I am unable to login into the db - throws PAM 
error message. Please help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1 pam.d]$ psql -h localhost -p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed


ok, pgbouncer should be able to read the /etc/pam* files.
Did you miss the
# chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
# chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer
part?



Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408 
login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86, recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
testdb/testuser@[::1]:52408 read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
testdb/testuser@[::1]:52408 pam_auth_begin(): pam_first_taken_slot=1, 
pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing slot 1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed: 
Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization 
completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: testdb/testuser@[::1]:52408 
closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
testdb/testuser@[::1]:52408 pooler error: auth failed

Able to login as testuser
[ec2-user@ip-1.1.1.1 pam.d]$ su - testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1 ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) 
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The user was created as follows
[root@ip-1.1.1.1 ~]# adduser -p hello123 testuser
[root@ip-1.1.1.1 ~]# id testuser
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)

Here is the pgbouncer.ini config
[ec2-user@ip-1.1.1.1 etc]$ less pgbouncer.ini | grep -v '^$' | grep -v '^;'
[databases]
testdb = host=dbhost port=3306 dbname=db1
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam

Am I missing something? Any permissions?

On Thu, Sep 12, 2019 at 4:54 AM Ayub M mailto:hia...@gmail.com>> wrote:

Okay, thanks for the response. Unfortunately Aurora does not expose these 
files or I should say there is no concept of these files in AWS managed Aurora 
DB service. Anyway I will give a try and
let you know.

On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 11/9/19 2:47 μ.μ., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf and 
pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with postgresql. 
Unfortunately no experience with Aurora. He have been building from source for 
ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 9/9/19 12:41 μ.μ., Laurenz Albe wrote:
> Christoph Moench-Tegeder wrote:
>>> It has hba and via hba file one can specify ldap connections
>>>
>>> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html
>> https://pgbouncer.github.io/config.html#hba-file-format
>> "Auth-method field: Only methods supported by PgBouncer’s 
auth_type
>> are supported", and "ldap" is not supported.
>> When there's no ldap support in pgbouncer, there's no ldap 
support
>> in pgbouncer.
> To throw in something less tautological:
>
> PgBouncer supports PAM authentication, so if you are on UNIX,
> you could use PAM's LDAP module to do what you want.
Right, I had written a blog about it :

https://severalnines.com/database-blog/one-security-system-application-connection-pooling-and-postgresql-case-ldap

However, I always wished (since my first endeavors with pgbouncer) 
it was less complicated.
>
> Yours,
    > Laurenz Albe


-- 
Achilleas Mantzios

 

Re: pgbouncer with ldap

2019-09-16 Thread Achilleas Mantzios

Please don't top-post.

On 14/9/19 9:41 π.μ., Ayub M wrote:

Yes I did set that, here is how pgbouncer looks like ---

-rwsrwsr-x. 1 root root 2087504 Sep 13 00:45 pgbouncer




If you had set the same password for the user in the postgresql server and for
the local unix user on the pgbouncer machine, it should work.
What are the contents of your /etc/pam.d files?
How do /etc/pam.d/other, /etc/pam.d/common-auth, /etc/pam.d/common-account look
like?
How about data/pg_hba.conf?

Also, run your tests while tail -f'ing:
* the pgbouncer log
* the linux auth.log or equivalent
* the pgsql log
and watch them on every enter you press.

You might have to tweak data/pg_hba.conf as well, in order to look for md5
passwords for this user from the pgbouncer machine.


On Fri, Sep 13, 2019 at 6:50 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 13/9/19 10:19 π.μ., Ayub M wrote:

Stumbled in the first step - PAM authentication via pgbouncer. After 
compiling pgbouncer with the pam plug-in, I am unable to login into the db - 
throws PAM error message. Please help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1 <mailto:ec2-user@ip-1.1.1.1> pam.d]$ psql -h localhost 
-p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed


ok, pgbouncer should be able to read the /etc/pam* files.
Did you miss the
# chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
# chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer
part?



Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86, 
recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
testdb/testuser@[::1]:52408 read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
testdb/testuser@[::1]:52408 pam_auth_begin(): pam_first_taken_slot=1, 
pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing slot 
1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed: 
Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization 
completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
testdb/testuser@[::1]:52408 pooler error: auth failed

Able to login as testuser
[ec2-user@ip-1.1.1.1 <mailto:ec2-user@ip-1.1.1.1> pam.d]$ su - testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1 <mailto:testuser@ip-1.1.1.1> ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) 
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The user was created as follows
[root@ip-1.1.1.1 <mailto:root@ip-1.1.1.1> ~]# adduser -p hello123 testuser
[root@ip-1.1.1.1 <mailto:root@ip-1.1.1.1> ~]# id testuser
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)

Here is the pgbouncer.ini config
[ec2-user@ip-1.1.1.1 <mailto:ec2-user@ip-1.1.1.1> etc]$ less pgbouncer.ini 
| grep -v '^$' | grep -v '^;'
[databases]
testdb = host=dbhost port=3306 dbname=db1
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam

Am I missing something? Any permissions?

On Thu, Sep 12, 2019 at 4:54 AM Ayub M mailto:hia...@gmail.com>> wrote:

Okay, thanks for the response. Unfortunately Aurora does not expose 
these files or I should say there is no concept of these files in AWS managed 
Aurora DB service. Anyway I will give a try
and let you know.

On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

On 11/9/19 2:47 μ.μ., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf 
and pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with postgresql. 
Unfortunately no experience with Aurora. He have been building from source for 
ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>> wrote:

On 9/9/19 12:41 μ.μ., Laurenz A

Re: Web users as database users?

2019-09-20 Thread Achilleas Mantzios

On 20/9/19 2:50 μ.μ., David Gallagher wrote:
Hi - I'm working on a database that will be accessed via a web app. I'm used to a setup where there is one account/role that the app uses to access the database, but in my current scenario I'm
interested in row level security and auditing enforced by the database. In this scenario, would it make sense to have a user account on the database mirroring each user account of the web app? Is
that an unusual practice?


I never regretted it. Just place pgbouncer in front and configure it right.
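The row level security part then falls out naturally. A minimal sketch (table, column and role names are made up; web_users is a group role all web accounts are members of):

CREATE TABLE documents (
    owner name NOT NULL DEFAULT current_user,
    body  text
);
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE POLICY owner_only ON documents
    USING (owner = current_user);
GRANT SELECT, INSERT, UPDATE, DELETE ON documents TO web_users;

With one database role per web user, current_user inside the policy identifies the actual person, and your audit trail (log_line_prefix with %u, or pgaudit) records real names instead of one shared app account.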


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: pgbouncer with ldap

2019-09-22 Thread Achilleas Mantzios

On 23/9/19 9:10 π.μ., Ayub M wrote:
Achilleas, I am struggling with the libpam_ldap package install. Does its setup also involve the sssd daemon with nss and pam? I am seeing many different ways of dealing with ldap via pam and different
modules which I am not familiar with (nscld, sssd etc). If you have any steps documented for installing libpam_ldap kindly share them with me.




Hello,
you just have to (for debian based):
apt install libpam-ldap
Did you do that? Did you answer the conf questions correctly? No additional packages are needed IIRC. PAM has many modules; the module for LDAP is one of them, the module for SSH is another. Read up a little
bit on the PAM architecture (an old but very successful Sun Microsystems technology) and things will be clearer.

Things to note :
There are two ldap conf files in your /etc :
1) /etc/ldap/ldap.conf : the classic ldap client conf file, used by e.g. 
ldapsearch
2) /etc/ldap.conf : the conf file of libpam-ldap




On Wed, Sep 18, 2019 at 9:48 AM Ayub M mailto:hia...@gmail.com>> wrote:

Will do, thanks!

On Wed, Sep 18, 2019, 4:55 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

Thanx I am really glad I could help! pgbouncer and LDAP is a PITA!

It could be nice if you replied to the list thread just to have an 
update on how it goes.


On 18/9/19 11:23 π.μ., Ayub M wrote:

Hi Achilleas, yes got the pam authentication working. I had to create 
pgbouncer file in pam directory. Now I am working on second part, which is 
integration of pam with ldap. Getting ldap
creds, will soon update you on how it goes thanks for your help. 
Without your article I would have given up on this option..

On Wed, Sep 18, 2019, 1:53 AM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:

Hello

any luck?

On 14/9/19 9:41 π.μ., Ayub M wrote:

Yes I did set that, here is how pgbouncer looks like ---

-rwsrwsr-x. 1 root root 2087504 Sep 13 00:45 pgbouncer


On Fri, Sep 13, 2019 at 6:50 AM Achilleas Mantzios 
mailto:ach...@matrix.gatewaynet.com>> wrote:

On 13/9/19 10:19 π.μ., Ayub M wrote:

Stumbled in the first step - PAM authentication via pgbouncer. 
After compiling pgbouncer with the pam plug-in, I am unable to login into the 
db - throws PAM error message. Please
help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1 <mailto:ec2-user@ip-1.1.1.1> pam.d]$ psql 
-h localhost -p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed


ok, pgbouncer should be able to read /etc/pam* files.
Did you miss the
# chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
# chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer
part?



Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, 
parse=86, recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, 
parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 
14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
testdb/testuser@[::1]:52408 read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
testdb/testuser@[::1]:52408 pam_auth_begin(): pam_first_taken_slot=1, 
pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): 
processing slot 1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() 
failed: Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): 
authorization completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
testdb/testuser@[::1]:52408 pooler error: auth failed

Able to login as testuser
[ec2-user@ip-1.1.1.1 <mailto:ec2-user@ip-1.1.1.1> pam.d]$ su - 
testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1 <mailto:testuser@ip-1.1.1.1> ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) 
context=unconfined_u:u

Re: pgq is one of the most underrated pg related stuff

2019-09-27 Thread Achilleas Mantzios

On 27/9/19 1:41 μ.μ., Luca Ferrari wrote:

On Wed, Sep 25, 2019 at 12:18 PM Миша Тюрин  wrote:

// another underrated was multicorn. Is it still alive?

Apparently it is <https://github.com/Kozea/Multicorn/commits/master>,
even if it is breathing slow
<https://github.com/Kozea/Multicorn/issues>.

It's kind of sad that it's late 2019 and LDAP foreign data wrapper support is 
no more.


Luca





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Is my lecturer wrong about PostgreSQL? I think he is!

2019-10-10 Thread Achilleas Mantzios



On 10/10/19 11:31 π.μ., Wim Bertels wrote:



3) Were there ever any problems with BSD?

as far as i understand BSD and variants are very solid,
so good for server use, not for desktop


Desktop software might be a little old, but that does not mean that this 
is not usable.


I have been running a FreeBSD desktop for ages.










Re: pgaudit log directory

2019-11-19 Thread Achilleas Mantzios

On 18/11/19 9:56 μ.μ., Dave Hughes wrote:

Hello,
I'm using PostgreSQL 10.5 on Linux (RHEL).  I'm new to administering PostgreSQL and recently installed pgaudit.  I believe I have it installed correctly and wanted to start playing with it to see 
how exactly it works.


So while walking through a tutorial I found online, I saw where I can enter a 
statement in PostgreSQL, such as:
ALTER SYSTEM SET pgaudit.log TO 'read, write';
SELECT pg_reload_conf();
Then after reading or writing to a table, you can then check "pg_log" for the 
audit entries.  But my issue is that I can't find the log file at all?

In my main PostgreSQL directory (/work/PostgreSQL/10)I do have a file called "logfile", but there are no entries from today.  When I go into the pgaudit sub-directory (/work/PostegreSQL/10/pgaudit) 
I don't see any log file in there either?


pgaudit writes to the standard PostgreSQL log.
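
If in doubt about where that is, you can ask the server itself (pg_current_logfile() needs 10+, which you have):

SHOW logging_collector;
SHOW log_directory;
SHOW log_filename;
SELECT pg_current_logfile();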



Can someone point me in the right direction?

Thanks,
Dave Hughes



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Achilleas Mantzios

Hello Dear Postgresql ppl,
I have a table with date ranges and need to express the following constraint : 
allow overlaps only if there is complete containment, e.g.
allow values in rows like :
[2020-01-01,2020-01-31)
[2020-01-02,2020-01-10)
[2020-01-10,2020-01-20)

but disallow rows like

[2020-01-02,2020-01-10)
[2020-01-08,2020-01-11)

I think that writing a new commutative range operator e.g. <@@>, which would return true if the left operand is either contained by or contains the right operand and false otherwise, would solve this; 
I am just wondering if there is a more elegant and economical way to express this (besides writing a trigger, which is always an option).


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-29 Thread Achilleas Mantzios

On 29/1/20 8:43 μ.μ., Michael Lewis wrote:

If the new value does not overlap with any existing, allow. If it does 
overlap, then it must be fully contained by the existing element, or 
the existing element must be fully contained by it. Else, reject. Is 
that right?



Exactly.





Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-30 Thread Achilleas Mantzios

On 29/1/20 8:32 μ.μ., Adrian Klaver wrote:

On 1/29/20 8:12 AM, Achilleas Mantzios wrote:

Hello Dear Postgresql ppl,
I have a table with date ranges and need to express the following constraint : 
allow overlaps only if there is complete containment, e.g.
allow values in rows like :
[2020-01-01,2020-01-31)
[2020-01-02,2020-01-10)
[2020-01-10,2020-01-20)

but disallow rows like

[2020-01-02,2020-01-10)
[2020-01-08,2020-01-11)


I'm missing something. Can you provide a more complete example?

Dear Adrian,
I can give an example: let's say that we want to model the concept of a budget, and we allow basic complete budgets covering a specific time period (daterange) which will have predictions and also 
matched actual transactions (debits, credits), but also want "super" budgets of larger time periods which include a number of basic budgets (sub-budgets) and for which only predictions are allowed, 
not actual transactions. We could make the design strict and explicit by using referential constraints (basic budget pointing to a super budget), but if we chose not to make it strict, and conversely 
allow a more dynamic and liberal way where the system detects one form or the other, we could say :

each sub-budget (of the lowest level of the hierarchy - although for the time 
being we have only two levels) cannot overlap with any other sub-budget
each super-budget can only fully contain its sub-budgets , no partial overlap 
allowed.

This could be solved easily if there was a commutative containment operator 
like :
CREATE OR REPLACE FUNCTION range_containment(anyrange, anyrange)
 RETURNS boolean
 LANGUAGE sql
 IMMUTABLE PARALLEL SAFE STRICT
AS $function$
select $1 <@ $2 OR $1 @> $2;
$function$;

so range_containment returns true if the first operand is contained in the 
second or contains the second.

create operator <@@> (PROCEDURE=range_containment, LEFTARG=anyrange, 
RIGHTARG=anyrange, COMMUTATOR = <@@> );

But unfortunately :

alter table bdynacom.acc_budget ADD CONSTRAINT acc_budget_start_end EXCLUDE USING 
gist (daterange(period_start, period_end, '[]'::text) WITH <@@>);
ERROR:  operator <@@>(anyrange,anyrange) is not a member of operator family 
"range_ops"
DETAIL:  The exclusion operator must be related to the index operator class for 
the constraint.

From some research I did, this might mean recompiling the source to make <@@> 
a member of range_ops.
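
Meanwhile, the trigger option mentioned above could look like the following. A minimal sketch, assuming an "id" primary key on the table; note that unlike an exclusion constraint, a plain trigger check is not race-safe under concurrent writes without extra locking or serializable isolation:

CREATE OR REPLACE FUNCTION check_budget_containment() RETURNS trigger AS $$
DECLARE
    r daterange := daterange(NEW.period_start, NEW.period_end, '[]');
BEGIN
    IF EXISTS (
        SELECT 1
        FROM bdynacom.acc_budget b
        WHERE b.id <> NEW.id   -- the "id" PK is an assumption
          AND daterange(b.period_start, b.period_end, '[]') && r
          AND NOT (daterange(b.period_start, b.period_end, '[]') <@ r
                OR daterange(b.period_start, b.period_end, '[]') @> r)
    ) THEN
        RAISE EXCEPTION 'range % partially overlaps an existing budget', r;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER acc_budget_containment
    BEFORE INSERT OR UPDATE ON bdynacom.acc_budget
    FOR EACH ROW EXECUTE FUNCTION check_budget_containment();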





I think that writing a new commutative range operator e.g. |<@@> which would return true if the left operand is either contained by or contains the right operand and false otherwise would solve 
this, I am just wondering if there is a more elegant and economical way to express this. (besides writing a trigger which is always an option).|


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Exclude constraint on ranges : commutative containment : allow only complete containment

2020-01-30 Thread Achilleas Mantzios

On 30/1/20 6:49 μ.μ., Adrian Klaver wrote:

On 1/30/20 3:46 AM, Achilleas Mantzios wrote:

On 29/1/20 8:32 μ.μ., Adrian Klaver wrote:

On 1/29/20 8:12 AM, Achilleas Mantzios wrote:

Hello Dear Postgresql ppl,
I have a table with date ranges and need to express the following constraint : 
allow overlaps only if there is complete containment, e.g.
allow values in rows like :
[2020-01-01,2020-01-31)
[2020-01-02,2020-01-10)
[2020-01-10,2020-01-20)

but disallow rows like

[2020-01-02,2020-01-10)
[2020-01-08,2020-01-11)


I'm missing something. Can you provide a more complete example?

Dear Adrian,
I can give an example, lets say that we want to model the concept of budget, and we allow basic complete budgets covering a specific time period (daterange) which will have predictions and also 
matched actual transactions (Debits, Credits) , but also want "super" budgets of larger time periods which include a number of basic budgets (sub-budgets) and for which only predictions are 
allowed, not actual transactions. We could make the design strict and explicit by using referential constraints (basic budget pointing to a super budget) but If we chose to not make it strict , and 
conversely allow a more dynamic and liberal way that the system detects one form or the other , we could say :

each sub-budget (of the lowest level of the hierarchy - although for the time 
being we have only two levels) cannot overlap with any other sub-budget
each super-budget can only fully contain its sub-budgets , no partial overlap 
allowed.


Um, that makes my head hurt:) Questions:

1) Are the basic complete budgets and the sub-budgets and super budgets 
existing in the same table?

2) Depending on answer to 1, to prevent overlap could you not use a form of the 
example here?:

https://www.postgresql.org/docs/12/rangetypes.html#RANGETYPES-CONSTRAINT

CREATE TABLE reservation (
    during tsrange,
    EXCLUDE USING GIST (during WITH &&)
);


Same table.
Overlap is allowed, but only when it is complete containment, which is the whole 
point of this thread.






This could be solved easily if there was a commutative containment operator 
like :
CREATE OR REPLACE FUNCTION range_containment(anyrange, anyrange)
  RETURNS boolean
  LANGUAGE sql
  IMMUTABLE PARALLEL SAFE STRICT
AS $function$
select $1 <@ $2 OR $1 @> $2;
$function$;

so range_containment returns true if the first operand is contained in the 
second or contains the second.

create operator <@@> (PROCEDURE=range_containment, LEFTARG=anyrange, 
RIGHTARG=anyrange, COMMUTATOR = <@@> );

But unfortunately :

alter table bdynacom.acc_budget ADD CONSTRAINT acc_budget_start_end EXCLUDE USING 
gist (daterange(period_start, period_end, '[]'::text) WITH <@@>);
ERROR:  operator <@@>(anyrange,anyrange) is not a member of operator family 
"range_ops"
DETAIL:  The exclusion operator must be related to the index operator class for 
the constraint.

 From a small research I did this might mean recompiling the source to make 
<@@> member of range_ops .





I think that writing a new commutative range operator e.g. |<@@> which would return true if the left operand is either contained by or contains the right operand and false otherwise would solve 
this, I am just wondering if there is a more elegant and economical way to express this. (besides writing a trigger which is always an option).|


--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt













--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: PL/pgSQL question about EXCEPTION clause & corrupt records

2020-02-14 Thread Achilleas Mantzios

On 14/2/20 2:39 μ.μ., Nick Renders wrote:


Hello,

We recently suffered a database crash which resulted in some corrupt records.

I thought I would write a little PL script that would loop through all the data 
and report any inconsistencies. However, I can't get it to work properly.

For instance, if I run the following statement in pgAdmin:

SELECT * FROM f_gsxws_transaction WHERE gwta_number = 762513

I get the following message:

ERROR:  missing chunk number 0 for toast value 8289525 in pg_toast_5572299


So, as a test, I created a function that would just retrieve that one record:

DECLARE
    rcontent f_gsxws_transaction%ROWTYPE;
BEGIN
    SELECT * INTO rcontent FROM f_gsxws_transaction where gwta_number = 
762513;
    RETURN rcontent;
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'Record 762513 is corrupt';
END;


Now, when I run this function, I have noticed two things:

1) The function has no problem executing the SELECT statement. It is only when "rcontents" is returned, that the function fails. This is a problem, because the ultimate goal is to loop through all 
records and only return/alert something in case of an error.


2) The function never enters the EXCEPTION clause. Instead, when it hits the RETURN command, it breaks and shows the same message as in pgAdmin: missing chunk number 0 for toast value 8289525 in 
pg_toast_5572299.

Does the table have any PKs or UKs?
Do something like the following, using "pk" as a stand-in for your primary key column:

FOR vid IN SELECT pk FROM f_gsxws_transaction WHERE gwta_number = 762513 ORDER BY pk LOOP
    RAISE NOTICE 'examining row with pk = %', vid;
    SELECT * INTO rcontent FROM f_gsxws_transaction WHERE pk = vid;
    RAISE NOTICE 'content of row pk = %, is %', vid, rcontent;
END LOOP;
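
To report corrupt rows and keep going instead of aborting on the first one, each row access can be wrapped in its own guarded block, with an explicit detoast inside it. A minimal sketch, again with "pk" standing in for the real key column (its bigint type is an assumption):

DO $$
DECLARE
    vid bigint;
    rcontent f_gsxws_transaction%ROWTYPE;
BEGIN
    FOR vid IN SELECT pk FROM f_gsxws_transaction ORDER BY pk LOOP
        BEGIN
            SELECT * INTO rcontent FROM f_gsxws_transaction WHERE pk = vid;
            PERFORM rcontent::text;  -- force detoasting inside the guarded block
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'record with pk = % is corrupt: %', vid, SQLERRM;
        END;
    END LOOP;
END;
$$;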




Is it possible to check for these kind of errors with a PL script? Or is there 
perhaps a better way to check for corrupt records in a database?

Best regards,

Nick Renders





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Rules versus triggers

2020-03-09 Thread Achilleas Mantzios

On 7/3/20 5:01 μ.μ., Justin wrote:

Yes a rule can rewrite query or replace the query

Read through Depesz  post about rules and the weird side affects that can occurr
https://www.depesz.com/2010/06/15/to-rule-or-not-to-rule-that-is-the-question/


IMHO rules are cool. They allow for a more elegant design (vs doing it otherwise) 
and they result in faster queries. In our system we currently have 61 rules and 
331 triggers.
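
For a taste, here is a toy rewrite rule (table name is made up) that turns deletes into no-ops at query-rewrite time, before any plan is made:

CREATE RULE app_log_protect AS
    ON DELETE TO app_log
    DO INSTEAD NOTHING;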



On Sat, Mar 7, 2020 at 9:57 AM stan mailto:st...@panix.com>> 
wrote:

On Sat, Mar 07, 2020 at 09:47:39AM -0500, Justin wrote:
> Hi Stan
>
> Rules actual are able to rewrite the SQL query sent to postgresql.  Most
> everyone suggestion is avoid rules.
>
> Triggers are just like every other databases Triggers firing off code for
> Insert/Update/Delete/Truncate event
> https://www.postgresql.org/docs/current/sql-createtrigger.html
>

So, the RULE can actually rewrite the query, instead of replacing it,
which is what I'm doing in the function, correct?
-- 
"They that would give up essential liberty for temporary safety deserve

neither liberty nor safety."
                                                -- Benjamin Franklin




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: strange locks on PG 11 with Golang programs

2020-03-09 Thread Achilleas Mantzios

On 9/3/20 10:51 π.μ., Josef Machytka wrote:


We are lately experiencing very strange locks on PostgreSQL 11.7 when we 
process ETL tasks using our programs in Go 1.13.8 using standard libraries sql 
and pq.

ETL task has to rename tables but PostgreSQL shows that this simple operation waits for Lock on relation and select from PG wiki (https://wiki.postgresql.org/wiki/Lock_Monitoring) which should show 
blocking process shows that command "alter table  rename to " is blocked by process "COPY  to stdout" but from completely different table.




Fully review your programs for connection / transaction leaks. Do you use a connection pool? Make sure in the code that you close any connections that you open, even after exceptions; try to identify 
idle-in-transaction connections from pg_stat_activity, and remember to either commit or roll back non-autocommit connections (just for the sake of clarity).
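
Something like this is a quick health check for leaked transactions (a sketch; add a duration filter to taste):

SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;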



And from time to time even selects from some tables seem to by blocked by copy 
commands running on other tables. This is shown by the same select from PG wiki 
for blocking queries.

All this stuff runs from golang programs. So maybe problem is not on PostgreSQL 
but in golang libraries?

Thank you for any suggestions

Josef Machytka

Berlin




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



easy way to acquire height / width from images (PNG, JPEG) stored as bytea?

2020-04-17 Thread Achilleas Mantzios

Hello Dear List,

we have a table holding email attachments as bytea, and we would like to 
filter out images of small dimensions, which are not of any value to our 
logic.


I took a look at the pg_image extension and tested it, and it proved 
problematic: it killed my FreeBSD box with 200+ days of uptime :( . I dropped 
the extension and uninstalled it as soon as fsck finally finished.


So I would like to ask you, basically we have PNGs and JPEGs, is there 
an easy way of parsing their headers and getting info about their 
dimensions?


I could write a C function for that. For PNG it is quite easy, but for 
JPEG it gets a little bit complicated, albeit doable; I am just asking for 
something out of the box. Currently we load the images (in our java 
enterprise system) and filter them in Java, but this brings wildfly down 
to its knees pretty easily and quickly.
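
For the easy PNG case mentioned above, a few get_byte() calls on the bytea may already suffice. A minimal sketch (PNG only: width and height sit big-endian at byte offsets 16-19 and 20-23, right after the 8-byte signature and the IHDR chunk header; JPEG would need a marker scan):

CREATE OR REPLACE FUNCTION png_dimensions(img bytea, OUT width int, OUT height int)
AS $$
    SELECT (get_byte(img, 16) << 24) | (get_byte(img, 17) << 16) |
           (get_byte(img, 18) << 8)  |  get_byte(img, 19),
           (get_byte(img, 20) << 24) | (get_byte(img, 21) << 16) |
           (get_byte(img, 22) << 8)  |  get_byte(img, 23)
    WHERE substring(img from 1 for 8) = '\x89504e470d0a1a0a'::bytea  -- PNG signature
$$ LANGUAGE sql IMMUTABLE STRICT;

It returns NULLs for anything that is not a PNG, and an expression index on it would support fast filtering.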



Thank you and happy Easter.





Re: easy way to acquire height / width from images (PNG, JPEG) stored as bytea?

2020-04-17 Thread Achilleas Mantzios

On 17/4/20 4:09 μ.μ., Adam Brusselback wrote:

Why not extract and store that metadata with the image rather than 
trying to extract it to filter on at query time? That way you can 
index your height and width columns to speed up that filtering if 
necessary.


Yes, I thought of that, but those come in automatically from our mail 
server (via synonym); we have written an alias: a program that parses 
and stores the emails. This is generic, and I wouldn't like to add specific code 
(or specific columns) just for image attachments. However, I dig the 
idea of the indexes.
You may be able to write a wrapper for a command line tool like 
imagemagic or something so you can call that from a function to 
determine the size if you did want to stick with extracting that at 
query time.
As I described above, those attachments exist nowhere as files. They are 
email attachments. Also, we have about half a TB of them.


Re: easy way to acquire height / width from images (PNG, JPEG) stored as bytea?

2020-04-17 Thread Achilleas Mantzios

On 17/4/20 5:47 μ.μ., Steve Atkins wrote:



If running an extension crashed your server you should look at how / 
why, especially if it corrupted your filesystem.
That shouldn't happen on a correctly configured system, so the 
underlying issue might cause you other problems. Crashing postgresql, 
sure, but not anything that impacts the rest of the server.


Hello. This machine runs several extensions with no issues (even pljava, 
for Christ's sake, our heavily modified version of DBMirror, and lots of 
our own C functions, among others), two bhyve VMs running 
ubuntu, and one jail. Plus it functions as my workstation as well (wildfly, 
eclipse, etc). And it can run for years without a reboot.


Apparently lousy memory management by pg_image (it consumed all 32GB of RAM + 8GB of swap) 
didn't crash postgresql but brought the system to its knees. 
Plus this extension was last touched in 2013, go figure.



Cheers,
  Steve








Re: easy way to acquire height / width from images (PNG, JPEG) stored as bytea?

2020-04-17 Thread Achilleas Mantzios


On 17/4/20 6:14 μ.μ., Imre Samu wrote:

> it killed my 200+ days uptime FreeBSD box :( .
> As I describe above, those attachments are nowhere as files.
> They are email attachments. Also we got about half TB of them.

it is possible - that  some image is a "decompression bomb" ?

/"Because of the efficient compression method used in Portable Network 
Graphics (PNG) files, a small PNG file can expand tremendously, acting 
as a "decompression bomb". *Malformed PNG chunks can consume a large 
amount of CPU and wall-clock time and large amounts of memory, up to 
all memory available on a system, causing a Denial of Service (DoS).* 
Libpng-1.4.1 has been revised to use less CPU time and memory, and 
provides functions that applications can use to further defend against 
such files."/

https://libpng.sourceforge.io/decompression_bombs.html
https://stackoverflow.com/questions/33602308/how-to-check-png-file-if-its-a-decompression-bomb

Regards,
 Imre



Thank you  a lot Imre. Great info.





Achilleas Mantzios <mailto:ach...@matrix.gatewaynet.com> wrote (on 2020 Apr 
17, Fri, 16:39):


On 17/4/20 4:09 μ.μ., Adam Brusselback wrote:


Why not extract and store that metadata with the image rather
than trying to extract it to filter on at query time? That way
you can index your height and width columns to speed up that
filtering if necessary.


Yes I thought of that, but those are coming automatically from our
mail server (via synonym), we have written an alias : a program
that parses and stores emails. This is generic, I wouldn't like to
add specific code (or specific columns) just for image
attachments. However I dig the idea of the indexes.

You may be able to write a wrapper for a command line tool like
imagemagic or something so you can call that from a function to
determine the size if you did want to stick with extracting that
at query time.

As I describe above, those attachments are nowhere as files. They
are email attachments. Also we got about half TB of them.



Re: serie of serie

2020-04-18 Thread Achilleas Mantzios


On 18/4/20 10:18 π.μ., Olivier Leprêtre wrote:


Hi,

I would like to generate a series of series and didn't find how. With 
generate_series I can generate a series:

1 1 1 2 2 2 3 3 3

But how can I repeat this series several times?

1 1 1 2 2 2 3 3 3 1



with foo as (select floor(a) as a from generate_series(1.,4,1.0/3.0) as s(a))
select foo.a from foo, generate_series(1,10) as itera(b) order by b,a;


1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 ...
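
An equivalent spelling that avoids the fractional-step trick, as a sketch (one row per (iteration, value, repetition) combination):

select s.a
from generate_series(1,10) as itera(b)
     cross join generate_series(1,3) as s(a)
     cross join generate_series(1,3) as rep(r)
order by itera.b, s.a;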

Thanks for any help,


 





Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 2/6/20 10:45 μ.μ., Ravi Krishna wrote:

Generally speaking, I discourage having lots of databases under one PG
cluster for exactly these kinds of reasons.  PG's individual clusters
are relatively lightweight, after all.


Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL Server does not have real databases but schemas instead?
This sucks security-wise.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 5/6/20 3:33 μ.μ., Ravi Krishna wrote:

Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.

SQLServer has real databases with its own transaction log files.  You can 
restore individual databases in a cluster.
They also have schemas which are not same as users (Oracle treats users and 
schemas same).

Ok, I never meant that SQL Server does not have real databases; I meant that it handles 
databases as top-level schemas.


For security, there is grant connect to the DB and further filtering based on 
schema.

PostgreSQL has stronger protection at the connection level, via pg_hba.conf. 
PostgreSQL also supports db-level GRANTs.
In MS SQL Server, if you need an additional DB for maintenance tasks or to act as an intermediate bridge (like in our case) with write access on it, then you automatically get write access to the main 
schema (ok, DB in MS SQL terms) and need further REVOKEs to fix security. This (security-wise) is just poor.

So the cross db joins come with a price.
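
In PostgreSQL terms, the per-database gate is one statement each way (names are assumptions):

REVOKE CONNECT ON DATABASE maindb FROM PUBLIC;
GRANT CONNECT ON DATABASE maindb TO app_role;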


I find SQLServer implementation pretty strong in this.

The only time this can be a problem is when few databases failover in a 
mirrored environment (streaming replication in PG speak).
Then suddenly 3 part names would fail if the remote DB is no longer primary. My 
experience with SQLServer is badly dated. Last
time I worked was SS2008.  I believe in later versions they solved this problem 
by the failover group concept which failovers all
inter-related databases at one go.

BTW Mysql treats databases and schemas as same (that's what it was few years 
ago)




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Oracle vs. PostgreSQL - a comment

2020-06-05 Thread Achilleas Mantzios

On 5/6/20 5:19 μ.μ., Thomas Kellerer wrote:

Achilleas Mantzios schrieb am 05.06.2020 um 14:05:

Plus PG does not directly support cross database queries using 3 part name, 
something
sqlserver excels at.

Maybe because SQL server does not have real databases but schemas instead ?
This sucks security wise.

That is wrong.

SQL Server has both: databases and schemas and it allows for standard compliant 
catalog.schema.table references.

I think you are confusing that with MySQL where a schema and a database are the 
same thing
I think that you are confusing what you think I am confusing. I am talking about our MS SQL installation here at work. It was not set up by me anyway; I am the pgsql guy, but I still do most of the query 
work in MS SQL as well.

I already wrote in a subsequent email that my comment was far-fetched.






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Multitenent architecture

2020-06-10 Thread Achilleas Mantzios

On 4/6/20 6:52 μ.μ., Vasu Madhineni wrote:

Hi All,

We are planning a POC on multitenant architecture in Postgres, Could you please 
help us with steps for multitenant using schema for each application model.

Hello,
you may benefit from this blog post :
https://severalnines.com/database-blog/multitenancy-options-postgresql
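
The schema-per-tenant variant from that post boils down to a few statements per tenant. A minimal sketch (names are assumptions):

CREATE ROLE tenant_a LOGIN;
CREATE SCHEMA tenant_a AUTHORIZATION tenant_a;
ALTER ROLE tenant_a SET search_path = tenant_a;
-- repeat per tenant; shared tables can live in public or a dedicated schema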


Thank you so much all.

Regards,
Vasu



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Creating many tables gets logical replication stuck

2020-08-21 Thread Achilleas Mantzios

Dear Laurenz thank you for your analysis and report.

On 21/8/20 4:00 μ.μ., Laurenz Albe wrote:

Reproducer on 12.4:

This is identical problem with this report here : 
https://www.postgresql.org/message-id/6fa054d8-ad14-42a2-8926-5d79c97ecd65%40matrix.gatewaynet.com

Yours,
Laurenz Albe



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: pgbouncer bug?

2020-08-21 Thread Achilleas Mantzios



On 21/8/20 7:56 μ.μ., greigwise wrote:

Not sure if this is the right place to post this, but if not someone please
point me in the right direction.

My issue is with pgbouncer 1.14.   This does not seem to happen on 1.13.

If I do a service pgbouncer restart, then anytime I try to connect to my
databases via pgbouncer, I get ERROR: no such user regardless of what user
I'm using.  It's almost like it's not recognizing the auth_query I have
configured.  But then if I issue a reload, then it seems to work fine and I
no longer get the user not found.  The problem is easy enough to work around
as I don't restart pgbouncer all that much, but it doesn't seem like this is
probably the intended behavior.


You may go here : 
https://github.com/pgbouncer/pgbouncer/commits/pgbouncer_1_14_0


and review all commits between 1.13 and 1.14



Thanks,
Greig Wise

[Full Disclosure: I accidentally posted this message at first to the "base"
PostgreSQL group and am now moving to "PostgreSQL - general".]



--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html







Re: Creating many tables gets logical replication stuck

2020-08-25 Thread Achilleas Mantzios

Hello Keisuke
On 25/8/20 9:50 π.μ., Keisuke Kuroda wrote:

Hi All,

There was a similar problem in this discussion:
   Logical decoding CPU-bound w/ large number of tables
   
https://www.postgresql.org/message-id/flat/CAHoiPjzea6N0zuCi%3D%2Bf9v_j94nfsy6y8SU7-%3Dbp4%3D7qw6_i%3DRg%40mail.gmail.com


RelfilenodeMapHash from 1024 entries to 64.

The above changes reduced the performance impact.

However, I think the problem that there are too
many invalidations of RelfilenodeMapHash still remains.
As you report, when many tables are created/dropped/truncated,

Right! IIRC a massive drop also created the very same problem during the latest 
reproduction attempts, but I did not focus on this scenario.

The walsender process can get stuck.




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





Re: Doubt in pgbouncer

2020-10-02 Thread Achilleas Mantzios

On 2/10/20 9:50 π.μ., Fabio Pardi wrote:

Hi Rama,

On 02/10/2020 01:42, Rama Krishnan wrote:

Hi Friends,

By using pgbouncer can we split read and write queries?



pgbouncer is just a connection pooler.

The logic where to send the reads and where the writes, should be in our 
application.
Yeah, reading back some comments on github on various issues, the pgbouncer hackers were of the philosophy that pgbouncer should remain as query-agnostic as possible, in the sense that it should 
not inspect the content of the queries. Those issues had to do with classic problems like search_path in transaction mode (some PRs were rejected because of that).

You might find this interesting : https://github.com/awslabs/pgbouncer-rr-patch


regards,

fabio pardi



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Can I get some PostgreSQL developer feedback on these five general issues I have with PostgreSQL and its ecosystem?

2020-10-04 Thread Achilleas Mantzios


On 2/10/20 2:08 π.μ., tutilu...@tutanota.com wrote:


On Thu, Sep 24, 2020 at 10:40 PM mailto:tutilu...@tutanota.com>> wrote:


Well not partial as in incremental. Instead dump only some
portion of the schema with or without its associated data.

It's funny that you should bring that up, considering how it
was one of my points... See the point about pg_dump's bug on
Windows.


And you seem to have ignored the fact that one of the core
developers pointed out that it likely isn't a pg_dump bug - if
your terminal is using the same locale as the database, it should
have no difficulty dealing with the characters you are having
trouble with.  It seems likely that you simply need to learn how
to get your terminal set up correctly for it to work.

Yeah, this isn't rude or insulting at all...

Funny how my "incorrectly set up terminal" works perfectly for all 
other programs and my own test scripts, but not for pg_dump 
specifically. And only when using "special" characters. As already 
pointed out multiple times, in great detail, to deaf ears. Very 
interesting how you can manage to twist and bend that into it still 
somehow being "my fault". Because of course it cannot be pg_dump's 
fault. Absolutely not. It is unthinkable. It's the "rude user"'s fault 
who had the audacity to point out yet another PG bug which more than 
likely won't *ever* be fixed, as it's not even recognized, much less 
cared about. Probably because they *want* PostgreSQL to be crippled on 
Windows, judging by the responses in the past and how incredibly 
broken the joke of an installer is.


You should call it "Linux software with minimal pre-alpha Windows 
support" instead of pretending that it's cross-platform, and that goes 
for many FOSS projects as well which think exactly the same as you. 
The fact that I still use this garbage OS (Windows) speaks volumes of 
how incredibly crappy Linux is, which is utterly *unusable*.


But of course I should be grateful no matter what because it doesn't 
cost money. Because my time and energy is worthless. And the 
competition is "even worse", so that means I cannot point out any 
fault, ever, no matter how serious or how easily it could be fixed. I 
should just shut up and thank everyone for insulting me through 
carelessness and words. Or "fix it myself", because that's obviously 
an option as I haven't done it so far...


I did read the rest of your e-mail, but it would be pointless to reply 
to it as you clearly have the mentality that everyone should dedicate 
their lives to configuring a database and buying books instead of 
using it, because everyone should be core developers and everything 
must always be cryptic and difficult and blablabla. I'm sick of this 
attitude, and especially of being called "rude" by such 
rude-beyond-words people.


It would be refreshing to hear your be honest for once and just admit 
that you *want* it to be difficult. You *like* that there's a high 
threshold and it makes you feel superior to exclude "dumb" people who 
can't figure out all these cryptic (and downright broken) things. I 
truly believe that this is the reason for a lot of "weird" things 
which seem to make no sense on the surface.



I'd say take your time, take some deep breaths and decide what's good 
for you. Back in 2004, after 3 yrs of full production software with 
postgresql, someone from the mailing list (he's also in this thread!!) 
called me a "newbie", and I immediately started looking for 
alternatives, only to find out that there simply was no better DB 
software in the market/world back then (and still, as we speak). So I 
stayed with PGSQL and wrote what I believe is the best non-TCP/IP DB 
replication solution for the marine and shipping business (over satellite), 
which still thrives today as far as easiness, automation, completeness, 
correctness and cost are concerned.


Plus, I discover every day that I am still a newbie, after 20 yrs with 
postgresql. This is not personal, this is about being successful in the 
long run; one should weigh his options and act accordingly. It took 
me/us a long time before we spent a single penny on someone to write or 
fix code that would work for us, but this moment eventually came; there 
is a roof when going with community software. That roof came for us much, 
much later after we began using PostgreSQL. Value for money is so hard 
to beat. I have seen the code by our MS SQL partners, some interesting 
and serious things happening there, but when they hear what stock free 
pgsql can do they just freak out (the ones who understand).


So my advice, tell your CEO's the true potential of this technology and 
maybe show them some stats, some results of others, some numbers. Put 
them side by side with the rest of serious solutions and then decide.




What do you guys use for issue tracking, CI/CD and team management? any nice open source options?

2022-04-14 Thread Achilleas Mantzios

Hi All,
not pgsql related; since I could not find a more general list I am writing on 
-general, I hope this is ok.

We are at a phase of restructuring our software infrastructure, and I'd like to 
ask you guys, what do you use for :
a) Issue/bug tracking, we used to have Bugzilla then youtrack, the first one 
seems dead the second one has restriction over the number of users
b) Continuous Integration / Continuous Delivery (currently on TeamCity, not a 
big fan of this one either)
c) Project Management/team mgmt (nothing so far)

We use Git to manage our code base. Ideally all the above should be integrated 
with Git.

What issue/bug tracking is PostgreSQL itself using?
What continuous build system (CI/CD) is PostgreSQL itself using?
Any tool that you ppl or the PostgreSQL infrastructure use that links 
people/committers with bugs/issues, automatically closes issues upon 
commit/push, manages issues/projects/people ?

I know I am asking a lot, any help welcome, thanks!

--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: What do you guys use for issue tracking, CI/CD and team management? any nice open source options?

2022-04-18 Thread Achilleas Mantzios

On 16/4/22 6:39 π.μ., Michael Paquier wrote:

On Thu, Apr 14, 2022 at 06:19:44PM +0300, Achilleas Mantzios wrote:

What issue/bug tracking is PostgreSQL itself using?
What continuous build system (CI/CD) is PostgreSQL itself using?
Any tool that you ppl or the PostgreSQL infrastructure use that
links people/committers with bugs/issues, automatically closes
issues upon commit/push, manages issues/projects/people ?

The community does not have a bug tracker, everything is mail-based as
of today.  There is what we call the commit fest app:
https://commitfest.postgresql.org/

This is more a patch tracker for the development of new features
though, still there is a category for published patches that aim at
fixing bugs (see "Bug Fixes").  This does not act as a bug tracker to
be able to see all the live issues reported.

Thank you Michael!

--
Michael



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

dynacom=# select it.id, cept.value::numeric as val,
              numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
          from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
          where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
            AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
            AND md.application = 'Critical Equipment Performance Test'
            AND cept.systemdate >= (now() - '1 year'::interval)
            AND cept.value < -100
          ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)


select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND cept.value::numeric > '-1'
ORDER BY 1;
ERROR:  cannot convert infinity to numeric

-- has no problem testing against infinity

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND cept.value = 'inf'
ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)

-- still has problem testing the range

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> cept.value::numeric
ORDER BY 1;
ERROR:  cannot convert infinity to numeric


-- no problem if the query goes into its own CTE (an optimization barrier) and the test is done outside
with bar as (select it.id, cept.value::numeric as val,
                    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
             from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
             where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
               AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
               AND md.application = 'Critical Equipment Performance Test'
               AND cept.systemdate >= (now() - '1 year'::interval)
             ORDER BY 1)
select * from bar where NOT range @> val;
-- good data here

--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

Thanks David

On 19/7/22 17:19, David G. Johnston wrote:
On Tuesday, July 19, 2022, Achilleas Mantzios 
 wrote:



ERROR:  cannot convert infinity to numeric

-- has no problem testing against infinity

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND cept.value = 'inf'
ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)


The column cept.value contains an infinity.  I see nothing unusual in 
any of these queries given that fact.  If you try to cast the infinity 
to numeric it will fail.  If that doesn’t happen the query won’t fail.


Sorry, I must have been dizzy today with so much support.

Yep, there is some infinity in there, just not in this result set.

I think when the cast is in the WHERE filter, for some reason some 
subplan applies this filter first (and fails). But when the check is applied to 
the result, no infinity is found and it works correctly.




David J.


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios



On 19/7/22 17:23, Tom Lane wrote:

"David G. Johnston"  writes:

On Tuesday, July 19, 2022, Achilleas Mantzios 
wrote:

ERROR:  cannot convert infinity to numeric

The column cept.value contains an infinity.  I see nothing unusual in any
of these queries given that fact.  If you try to cast the infinity to
numeric it will fail.  If that doesn’t happen the query won’t fail.

FWIW, PG 14 and later do support infinity in the numeric type.
Yes, I noticed that, thank you Tom. I hope we'll be able to upgrade in 
the near future.


regards, tom lane







Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-19 Thread Achilleas Mantzios

Thank you Adrian!

On 19/7/22 18:36, Adrian Klaver wrote:

On 7/19/22 03:38, Achilleas Mantzios wrote:

I reformatted the queries to see things better.



-- still has problem testing the range

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> cept.value::numeric
ORDER BY 1;

ERROR:  cannot convert infinity to numeric


SELECT
    it.id,
    cept.value::numeric AS val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
AS RANGE

FROM
    items it,
    cept_report cept,
    dynacom.vessels vsl,
    machdefs md,
    cept_reportlimits ceptl
WHERE
    it.id = cept.id
    AND md.defid = ceptl.defid
    AND it.defid = md.defid
    AND it.vslwhid = vsl.id
    AND vsl.vslstatus = 'Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now() - '1 year'::interval)
AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') 
@> cept.value::numeric

ORDER BY
    1;

So the above fails. In your title when you say there is no infinity 
that means the cept.value, ceptl.min_alarm or ceptl.max_alarm  fields 
do not have any '-infinity' or 'infinity' values, correct?

There is infinity in cept.value, just not in this result set.
I got confused and wrongly assumed that since the result set (without the filter in the WHERE clause involving cept.value::numeric) did not contain any infinity, it should also work with the filter 
in the WHERE clause. Apparently a subplan executes this conversion in the WHERE before the other filters. I did not run EXPLAIN ANALYZE to prove this.





-- no problem if the query goes into its own CTE (an optimization barrier) and the test is done outside
with bar as (select it.id, cept.value::numeric as val,
                    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
             from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
             where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
               AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
               AND md.application = 'Critical Equipment Performance Test'
               AND cept.systemdate >= (now() - '1 year'::interval)
             ORDER BY 1)
select * from bar where NOT range @> val;
-- good data here


WITH bar AS (
    SELECT
    it.id,
    cept.value::numeric AS val,
    numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, 
'()') AS

RANGE
    FROM
    items it,
    cept_report cept,
    dynacom.vessels vsl,
    machdefs md,
    cept_reportlimits ceptl
    WHERE
    it.id = cept.id
    AND md.defid = ceptl.defid
    AND it.defid = md.defid
    AND it.vslwhid = vsl.id
    AND vsl.vslstatus = 'Active'
    AND md.application = 'Critical Equipment Performance Test'
    AND cept.systemdate >= (now() - '1 year'::interval)
    ORDER BY
    1
)
SELECT
    *
FROM
    bar
WHERE
    NOT RANGE @> val;

This version succeeds, correct? So breaking the range construction 
into one step and the @> test into another works. Though I am not sure 
why  @> changed to NOT  @>?
Yes, this succeeds. The correct version is the one with NOT; it does not change the 
behavior of the initial query.


--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt









Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-20 Thread Achilleas Mantzios

On 19/7/22 20:31, David G. Johnston wrote:

On Tuesday, July 19, 2022, Achilleas Mantzios  
wrote:

Thanks David

On 19/7/22 17:19, David G. Johnston wrote:

On Tuesday, July 19, 2022, Achilleas Mantzios 
 wrote:


ERROR:  cannot convert infinity to numeric

-- has no problem testing against infinity

select it.id, cept.value::numeric as val,
       numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') as range
from items it, cept_report cept, dynacom.vessels vsl, machdefs md, cept_reportlimits ceptl
where it.id = cept.id AND md.defid = ceptl.defid AND it.defid = md.defid
  AND it.vslwhid = vsl.id AND vsl.vslstatus = 'Active'
  AND md.application = 'Critical Equipment Performance Test'
  AND cept.systemdate >= (now() - '1 year'::interval)
  AND cept.value = 'inf'
ORDER BY 1;
 id | val | range
----+-----+-------
(0 rows)


The column cept.value contains an infinity.  I see nothing unusual in any 
of these queries given that fact. If you try to cast the infinity to numeric it 
will fail. If that doesn’t happen the
query won’t fail.


Sorry I must have been dizzy today with so much support.

Yep, there are some infinity in there, but not in this result set.

I think when the casting is in the WHERE filter for some reason some 
subplan uses this filter (and fails). But when this check is applied to the 
result, no infinity is found and works correctly.


That is what it means for SQL to be a declarative language, the order of execution/evaluation is determined to be efficient and not what is explicitly written.  You do have some control though, but 
using it also means you might make things worse.


I think you have issues anyway if you are doing equality checks on what seems 
to be a floating point column, regardless of which way you do the cast.

Thank you, will look into it further when I get the time.


David J.




--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt


Re: pgsql 10.19 : "ERROR: cannot convert infinity to numeric" except there is no infinity

2022-07-20 Thread Achilleas Mantzios

On 19/7/22 20:32, Adrian Klaver wrote:

On 7/19/22 10:26 AM, Achilleas Mantzios wrote:

Thank you Adrian!


Actually thank:

https://sqlformat.darold.net/



On 19/7/22 18:36, Adrian Klaver wrote:

On 7/19/22 03:38, Achilleas Mantzios wrote:

I reformatted queries to see thing better.






AND numrange(ceptl.min_alarm::numeric, ceptl.max_alarm::numeric, '()') @> 
cept.value::numeric
ORDER BY
    1;

So the above fails. In your title when you say there is no infinity that means 
the cept.value, ceptl.min_alarm or ceptl.max_alarm  fields do not have any 
'-infinity' or 'infinity' values, correct?

There is infinity in cept.value , just not in this result set.
I got confused and wrongly assumed that since the result set (without the filter in the WHERE clause including cept.value::numeric) did not contain any infinity it should also work with the filter 
in the WHERE clause. Apparently a subplan executes this conversion in the WHERE before the other filters. I did not do any analyze to prove this.




Have you tried:

NULLIF(cept.value, 'inf')::numeric

No, because the CTE version worked. I will keep it in mind for similar future problems.



--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt













--
Achilleas Mantzios
DBA, Analyst, IT Lead
IT DEPT
Dynacom Tankers Mgmt





Re: PostgreSQL vs MariaDB

2023-03-25 Thread Achilleas Mantzios

On 24/3/23 13:07, Inzamam Shafiq wrote:

Hi Team,

Hope you are doing well.

Can someone please list the pros and cons of MariaDB vs PostgreSQL that 
actually need serious consideration while choosing the right database 
for large OLTP DBs (terabytes)?




I would say that for small shops and simpler apps MySQL / MariaDB may be 
fine. If you go to enterprise class apps and infrastructure you'll start 
to need more, which means PostgreSQL.



For instance: unnest, transactional DDL, the extensions system, custom 
types, MVCC, etc. Those are very important features that even some 
commercial DBs lack.
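
Transactional DDL alone is worth a small demonstration: schema changes roll back like any other change (a sketch):

BEGIN;
CREATE TABLE t_new (id int PRIMARY KEY, payload text);
ALTER TABLE t_new ADD COLUMN created_at timestamptz DEFAULT now();
ROLLBACK;  -- t_new never existed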





Thanks.

Regards,

/Inzamam Shafiq/
/Sr. DBA/


--
Achilleas Mantzios
 IT DEV - HEAD
 IT DEPT
 Dynacom Tankers Mgmt


Re: doc sql-grant.html Synopsis error?

2023-04-08 Thread Achilleas Mantzios

On 8/4/23 15:58, jian he wrote:


Hi.
--work as intended.
grant ALL PRIVILEGES on FUNCTION pg_catalog.pg_reload_conf() to test;
grant ALL PRIVILEGES on FUNCTION pg_reload_conf() to test;

-- errors. Should it work, or am I interpreting the doc the 
wrong way?
GRANT ALL PRIVILEGES ON FUNCTION pg_reload_conf() IN SCHEMA pg_catalog 
TO test;
GRANT ALL PRIVILEGES ON FUNCTION pg_catalog.pg_reload_conf() IN SCHEMA 
pg_catalog TO test;


doc: https://www.postgresql.org/docs/current/sql-grant.html
GRANT { EXECUTE | ALL [ PRIVILEGES ] }
    ON { { FUNCTION | PROCEDURE | ROUTINE } routine_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...]
       | ALL { FUNCTIONS | PROCEDURES | ROUTINES } IN SCHEMA schema_name [, ...] }
    TO role_specification [, ...] [ WITH GRANT OPTION ]
    [ GRANTED BY role_specification ]
I am using postgres 16, but the grant function part does not change.
What did I miss?


Hello. You missed the docs. The "IN SCHEMA" form always goes with ALL 
FUNCTIONS at the beginning. What would be the point of specifying 
pg_catalog.pg_reload_conf() IN SCHEMA pg_catalog, i.e. giving the schema 
twice?
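
So the two valid spellings for your case are:

GRANT EXECUTE ON FUNCTION pg_catalog.pg_reload_conf() TO test;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA pg_catalog TO test;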


--
Achilleas Mantzios
 IT DEV - HEAD
 IT DEPT
 Dynacom Tankers Mgmt


Re: Guidance on INSERT RETURNING order

2023-04-11 Thread Achilleas Mantzios

On 11/4/23 23:06, Adrian Klaver wrote:

On 4/11/23 12:47, Federico wrote:

Hello list,


https://www.sqlite.org/lang_returning.html#limitations_and_caveats


Searching the archive seems that a using the INSERT SELECT ORDER BY
form should be a better solution,
so the above insert should be rewritten as

 INSERT INTO t(data)
 SELECT data FROM (VALUES ('a', 1), ('b', 2), ('c', 3)) as vv(data,
num) ORDER BY num
 RETURNING id


Or

with i as (INSERT INTO t(data) VALUES ('a'), ('b'), ('c') returning id)

select i.id from i order by id;

+1 for this version!



Sorry for the long email,
Thanks

 Federico





--
Achilleas Mantzios
 IT DEV - HEAD
 IT DEPT
 Dynacom Tankers Mgmt





Re: pg_basebackup / recovery

2023-04-12 Thread Achilleas Mantzios

On 13/4/23 01:31, Michael Paquier wrote:

On Wed, Apr 12, 2023 at 01:45:56PM +0300, Achilleas Mantzios - cloud wrote:

On 4/12/23 12:32, Fabrice Chapuis wrote:

During recovery process of a self contained backup, how postgres know to
stop reading wal when consistency is reached?

Because it knows the full packup info. It will observe the

STOP WAL LOCATION: 3BC7/4B000130 (file 0000000200003BC70000004B)

inside the backup file

There is a bit more to that in the recovery logic, depending mostly
on the presence of backup_ label file in the data folder when recovery
begins.  Once the backup_label is found at the beginning of recovery,
its information is stored in the control file and the file is renamed
to backup_label.old hence stopping the server when recovery has not
reached its expected point would rely on the control file contents
later on.  Then, the startup process and its WAL redo makes sure that
WAL replays until it finds the WAL record marking the end of the
backup.  Grepping for XLOG_BACKUP_END (WAL record type in this case)
shows all the areas that rely on that, and xlogrecovery.c covers the
most relevant bits.
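
As a quick illustration of that last point, the backup-end record is easy to spot with pg_waldump on the segment from the example above (a sketch):

pg_waldump 0000000200003BC70000004B | grep BACKUP_END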


Thank you for the info!

Sorry about my stupid typo "packup" :(


--
Michael


--
Achilleas Mantzios
 IT DEV - HEAD
 IT DEPT
 Dynacom Tankers Mgmt





Re: Copy data from DB2 (Linux) to PG

2018-11-02 Thread Achilleas Mantzios

On 1/11/18 7:27 μ.μ., Ravi Krishna wrote:

I have a project to develop a script/tool to copy data from DB2 to PG.  The 
approach I am thinking is

1. Export data from db2 in a text file, with, say pipe as delimiter.
2. Load the data from the text file to PG using COPY command.

In order to make it faster I can parallelize export and load with upto X number 
of tables concurrently.

Is there a tool in which I can avoid first exporting and then loading.  I think 
one way I can do it is by
using pipes by which I can export from db2 to a pipe and simultaneously load it 
to PG using COPY.

https://pgloader.io/
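
For the pipe idea above, a minimal sketch; the db2 export invocation is an assumption, so check your DB2 version's syntax:

mkfifo /tmp/t1.pipe
db2 "EXPORT TO /tmp/t1.pipe OF DEL MODIFIED BY COLDEL| SELECT * FROM myschema.t1" &
psql -d targetdb -c "\copy myschema.t1 FROM '/tmp/t1.pipe' WITH (FORMAT csv, DELIMITER '|')"

This streams rows without materializing the intermediate file, and can be run per table in parallel.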


Any other tool for this job?

thanks.




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Logical replication hangs up.

2018-11-05 Thread Achilleas Mantzios
  len (rec/tot): 24/    24, tx:  0, lsn:
426/AAE55BB8, prev 426/AAE55B80, desc: SWITCH


I also did the same for sent_lsn:

select pg_walfile_name('429/69E9CC60');

+--------------------------+
|     pg_walfile_name      |
+--------------------------+
| 000000010000042900000069 |
+--------------------------+

/usr/lib/postgresql/10/bin/pg_waldump -s 429/69E9CC60 000000010000042900000069 | less
rmgr: Heap    len (rec/tot): 76/    76, tx: 5536495, lsn: 429/69E9CC60, prev 429/69E9CC10, desc: DELETE off 81 KEYS_UPDATED , blkref #0: rel 1663/16421/38572 blk 40
rmgr: Heap    len (rec/tot): 76/    76, tx: 5536495, lsn: 429/69E9CCB0, prev 429/69E9CC60, desc: DELETE off 82 KEYS_UPDATED , blkref #0: rel 1663/16421/38572 blk 40
rmgr: Heap    len (rec/tot): 76/    76, tx: 5536495, lsn: 429/69E9CD00, prev 429/69E9CCB0, desc: DELETE off 83 KEYS_UPDATED , blkref #0: rel 1663/16421/38572 blk 40

The relation 38572 from the list above is an ordinary non-replicated
table.

Any help or advice how to diagnose/get it working is highly
appreciated.

Kind regards Ales Zeleny





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: LOG: incomplete startup packet

2018-11-08 Thread Achilleas Mantzios

On 8/11/18 11:19 π.μ., Pavel Demidov wrote:

Hello,

Time to time found in postgres.log the following message

01:35:22.608  LOG: incomplete startup packet
01:35:27.147  LOG: incomplete startup packet
01:35:52.593  LOG: incomplete startup packet
01:35:57.146  LOG: incomplete startup packet

01:36:22.596  LOG: incomplete startup packet
01:36:27.146  LOG: incomplete startup packet
01:36:52.593  LOG: incomplete startup packet
01:36:57.145  LOG: incomplete startup packet

Is there any way to identify the source? What do I need to collect and 
where do I need to look?

Just try enabling log_connections, and the previous log line should tell you the IP.
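
That can be flipped on without a restart (a sketch):

ALTER SYSTEM SET log_connections = on;
SELECT pg_reload_conf();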


Best Regards
Paul



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




pgconf eu 2018 slides entry missing from https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentations

2018-11-18 Thread Achilleas Mantzios

Hello,

The pgconf eu 2018 entry is missing from 
https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentations 
.


I am afraid the more time passes since the event, the harder it gets to have
people post their slides there, so please update this ASAP.





Re: PostgreSQL MVCC and alternative REDO implementation - my thoughts

2018-11-18 Thread Achilleas Mantzios



On 18/11/18 2:04 μ.μ., Gerhard Wiesinger wrote:

On 18.11.2018 12:14, Martín Marqués wrote:

El 18/11/18 a las 04:49, Gerhard Wiesinger escribió:

Hello,

I just read that blog article and I think switching to REDO logic is 
not

a good idea.

I think you got it the other way around. Postgres implementes REDO, that
is, move forward by REDOing the changes which are stored in WAL files

BTW, could you point us to the blog you read this?

Maybe you got REDO and UNDO mixed up.



Yes, I meant UNDO ...

Link is here:

https://severalnines.com/blog/five-cool-things-i-learned-postgresql-conference-europe-2018 



sunday morning, too early :-)



In this blog I had the role of the reporter for the event :)

When and if this makes it into core, both heaps will be supported. 
Nevertheless this sounds like an exciting feature to have; however, lots of work 
must be done before PostgreSQL is able to adopt it in an elegant manner.





Ciao,

Gerhard







Re: pgconf eu 2018 slides entry missing from https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentations

2018-11-18 Thread Achilleas Mantzios

On 19/11/18 8:48 π.μ., Charles Clavadetscher wrote:

Hello Achilleas


-Original Message-
From: Achilleas Mantzios 
Sent: Sonntag, 18. November 2018 12:23
To: pgsql-gene...@postgresql.org
Subject: pgconf eu 2018 slides entry missing from
https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentatio
ns

Hello,

The pgconf eu 2018 entry is missing from
https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentatio
ns
.

I am afraid the more time passes since the event, the harder it gets to have
people post their slides there, so please update this ASAP.


[>] I made an entry in the wiki:

https://wiki.postgresql.org/wiki/PostgreSQL_Conference_Europe_Talks_2018

As mentioned there, the slides are linked, as long as they have been delivered 
by the speakers, in the talk descriptions in the schedule.


Thank you! Let's hope the list fills up!



Bye
Charles






--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: pgconf eu 2018 slides entry missing from https://wiki.postgresql.org/wiki/PostgreSQL_Related_Slides_and_Presentations

2018-11-19 Thread Achilleas Mantzios

On 19/11/18 3:27 μ.μ., Stephen Frost wrote:

Greetings,

* Charles Clavadetscher (clavadetsc...@swisspug.org) wrote:

https://wiki.postgresql.org/wiki/PostgreSQL_Conference_Europe_Talks_2018

As mentioned there, the slides are linked, as long as they have been delivered 
by the speakers, in the talk descriptions in the schedule.

I'm not sure what the point of duplicating that information on to the
wiki is..?



It is nice to have a single source for all slides from all conferences, so at 
least a link to the conference page should be there on the wiki.




Seems like what we should do here is add a feature to the conference
system where a particular URL will produce a list of talks with links
to slides, if uploaded.  That shouldn't be hard to do if someone is
interested and knows a little python/django- the code is here:

https://git.postgresql.org/gitweb/?p=pgeu-website.git;a=summary

Patches very much accepted. :)

Thanks!

Stephen



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Reg: Query for DB growth size

2018-11-29 Thread Achilleas Mantzios

Hello,

On 29/11/18 10:15 π.μ., R.R. PRAVEEN RAJA wrote:

Hi Andreas,

I am afraid you did not understand my question. I am not asking for a query for the current database size. I am asking about database growth, I mean predicting the increase in database size over a period like a 
month, just like what is available in Oracle. For Oracle you can see the link below.


http://www.dba-oracle.com/t_database_growth_reports.htm


There is a huge list of monitoring solutions for Unix/Linux/PostgreSQL. The hot new 
trend is Prometheus; you could also check out Nagios, Icinga, Munin or Zabbix, lots 
and lots of options there.
You might also try a commercial offering (e.g. Severalnines); they have 
pretty graphs, alarms and alerts of all kinds.
And last but not least, you could have your own monitoring table where you keep 
periodic snapshots of the live metrics.
You may read here for an overview : 
https://severalnines.com/blog/key-things-monitor-postgresql-analyzing-your-workload
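
For example, a minimal sketch of such a snapshot table (all names are hypothetical), fed periodically from cron:

CREATE TABLE db_size_history (
    snap_time  timestamptz NOT NULL DEFAULT now(),
    db_name    text        NOT NULL,
    size_bytes bigint      NOT NULL
);
-- run e.g. daily from cron:
INSERT INTO db_size_history (db_name, size_bytes)
SELECT datname, pg_database_size(datname)
FROM pg_database WHERE NOT datistemplate;
-- growth over the last 30 days:
SELECT max(size_bytes) - min(size_bytes) AS growth_bytes
FROM db_size_history
WHERE db_name = current_database()
  AND snap_time > now() - interval '30 days';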




 Thanks in advance.

Regards,
PraveenRaja


*From:* Andreas Joseph Krogh 
*Sent:* Thursday, November 29, 2018 1:10 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* Sv: Reg: Query for DB growth size
På torsdag 29. november 2018 kl. 07:41:24, skrev R.R. PRAVEEN RAJA 
mailto:rrpraveenr...@live.com>>:

Hi All,
Can i get the query or steps for how to check the Database growth in 
postgres. Thanks in advance.

select pg_size_pretty(pg_database_size(current_database()));
--
*Andreas Joseph Krogh*



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Transition Tables doesn't have OID

2018-12-01 Thread Achilleas Mantzios



On 1/12/18 6:51 μ.μ., PegoraroF10 wrote:

ok
then, help me to find alternatives to it.
As I said, sometimes we change our PK, so using OID would be a smart way to
have a join between old and new transition tables and we would like to use
transition tables because each statement is a lot faster than each row for
auditing purposes.


Unique key with a sequence, which you'll have to leave alone.
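
For example, a minimal sketch (table, log table and column names are all hypothetical; PostgreSQL 10 syntax):

-- surrogate key that is never updated, so old and new transition rows
-- can always be joined on it, even when the PK changes
ALTER TABLE t ADD COLUMN audit_id bigserial UNIQUE;

CREATE FUNCTION t_audit() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    INSERT INTO t_log (audit_id, old_pk, new_pk, changed_at)
    SELECT o.audit_id, o.id, n.id, now()
      FROM old_rows o JOIN new_rows n USING (audit_id);
    RETURN NULL;
END;
$$;

CREATE TRIGGER t_audit_upd AFTER UPDATE ON t
    REFERENCING OLD TABLE AS old_rows NEW TABLE AS new_rows
    FOR EACH STATEMENT EXECUTE PROCEDURE t_audit();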



So, what's the alternative? One trigger for each row just for changing PK
values (which will occur a few times) and another trigger for each statement to do
the logging?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





Re: Database corruption in cascaded replica, "pg_xact/003A" doesn't exist, reading as zeroes"

2018-12-05 Thread Achilleas Mantzios

On 5/12/18 4:10 μ.μ., Dejan Petrovic wrote:

I believe this is a result of my "broken" procedure for setting up a cascaded 
replica. I would love to know where the issue is.

This is a report of a database corruption which was detected after promoting a replica server to a master server. We suspect the actual corruption occurred during replication of a two-level cascaded 
replica, however it was not detected until after it was promoted to master server when pg_dump and pg_restore failed. After re-checking postgresql logs on the corrupted node, we found this log line, 
which was the only indicator of a possible issue, but was overlooked: "2018-07-25 13:14:42 UTCLOG: file "pg_xact/003A" doesn't exist, reading as zeroes".


After extensive investigation, we detected three different cases of data 
corruption:
1.)Issue with pg_toast - happens during pg_dump or when row with corrupted data 
is selected
"pg_dump: Dumping the contents of table failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: missing chunk number 2 for toast value 
86123528 in pg_toast_41291242"

2.) Issue with uncommitted transactions, occurs when running VACUUM or 
pg_dump/pg_restore:
"ERROR: uncommitted xmin 60817551 from before xid cutoff 61487222 needs to be 
frozen
SQL state: XX001"

3.) 2 duplicated rows in different tables, violated PK unique constraint, 
occurs during pg_restore

This is how we fixed the three issues:
1.) Using bisection and SELECT, we found the row/column with corrupted data. We 
nulled the corrupt column and deleted the row (it was not critical, had no FKs)
2.) We ran VACUUM on all tables to check where this issue occurred and updated all rows between the reported xmin and xid cutoff with the same data - this generated a new xmin on the broken rows, which fixed 
the issue.

3.) We removed the duplicated rows in a transaction, disabled all triggers 
before and enabled them right after
4.) Our final step is to do a full dump and restore on master DB so that the DB 
is rebuilt

This is the procedure we used to replicate the cascaded replica (master -> replica 
-> replica)

Notes:
Machines are running on Centos 7, Postgresql 10.2
DB-1 = master
DB-2 = replica of DB-1
DB-3 = replica of DB-2

--> DB-3 (new cascaded replica)
systemctl disable postgresql-10
systemctl stop postgresql-10
--> DB-1 (master)
psql -U postgres
select pg_start_backup('clone',true);
<- DB-3 (cascaded replica)
rsync -azv -e --delete DB-2:/isl/pgsql10/data /isl/pgsql10/ --exclude pg_wal 
--exclude postgresql.pid
assert rsync finished
vi recovery.conf
    standby_mode = 'on'
    primary_conninfo = 'host=DB-2'
    recovery_target_timeline = 'latest'


In addition to what Stephen wrote you, from : 
https://www.postgresql.org/docs/10/continuous-archiving.html
the low level backup method is relying on :
"1. Ensure that WAL archiving is enabled and working."
Have you set this up correctly?

Also, there should at least be a restore_command inside your recovery.conf. Otherwise how do you know that DB-2 still has the WALs you need? With your WALs archived it is guaranteed that the backup 
is consistent.
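
For example (a sketch; the archive location is hypothetical and must match your archive_command):

restore_command = 'cp /mnt/wal_archive/%f "%p"'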


But anyway, it's better to either use pg_basebackup or invest in some 
higher-level tool, like Stephen said.


-> DB-1
select pg_stop_backup();
<- DB-3
rsync -azv -e --delete DB-2:/isl/pgsql10/data/pg_wal /isl/pgsql10/data/
systemctl start postgresql-10
systemctl enable postgresql-10
tail log file

Any comments regarding the cascaded replication procedure or database 
corruption detection or resolution are welcome.

Best regards,

Dejan Petrovic
ISL Online





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: finding out what's generating WALs

2018-12-11 Thread Achilleas Mantzios

On 11/12/18 4:00 μ.μ., Chris Withers wrote:

Hi All,

With a 9.4 cluster, what's the best way to find out what's generating the most 
WAL?

I'm looking after a multi-tenant PG 9.4 cluster, and we've started getting 
alerts for the number of WALs on the server.
It'd be great to understand what's generating all that WAL and what's likely to 
be causing any problems.



One way is to keep snapshots of pg_stat_user_tables and then try to identify 
spikes based on the various _tup fields.
Another way is to take a look in your archive (where you keep your archived WALs), try to identify a period where excessive WALs were generated, and then use 
https://www.postgresql.org/docs/11/pgwaldump.html to see what's in there.
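
For example (a sketch; segment names are hypothetical, and the tool must match the server's major version; on 9.4 it is called pg_xlogdump, and the --stats option only appeared in 9.5):

pg_waldump --stats=record /path/to/archive/000100AB0001 /path/to/archive/000100AB00FF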


More generally, what number of WALs is "too much"? check_postgres.pl, when used in Nagios format, only appears to be able to alert on absolute thresholds; does this always make sense? What's a good 
threshold to alert on?




Regarding your WALs in pg_wal, a good threshold could be anything more than, e.g., a 
10% increase over wal_keep_segments with a trend to go up. If this number 
keeps going up, chances are something bad is happening.


cheers,

Chris




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: why would postgres be throttling a streaming replication slot's sending?

2018-12-13 Thread Achilleas Mantzios

On 12/12/18 3:45 μ.μ., Chris Withers wrote:

On 11/12/2018 14:48, Achilleas Mantzios wrote:

On 11/12/18 4:00 μ.μ., Chris Withers wrote:


I'm looking after a multi-tenant PG 9.4 cluster, and we've started getting 
alerts for the number of WALs on the server.
It'd be great to understand what's generating all that WAL and what's likely to 
be causing any problems.\


Regarding your WALs in pg_wal, a good threshold could be anything more than, e.g., a 10% increase over wal_keep_segments with a trend to go up. If this number keeps going up, chances are something bad is 
happening.


wal_keep_segments is 0 here, so I went hunting in pg_replication_slots and 
found that it's the barman slot that's behind on the WAL.



Then your number of WALs depends on the traffic generated after the previous checkpoint, the status of archiving and, IMO, the status of replication slots. The doc 
https://www.postgresql.org/docs/11/runtime-config-replication.html doesn't specifically imply this last one, but it is documented here: https://www.postgresql.org/docs/10/wal-configuration.html , and 
it has happened to me. If a replication client with a replication slot dies, WALs accumulate till it wakes up again (seen in pgsql 10).



strace'ing on the barman receiving side shows it waiting on the socket, so no 
problem on that side.

on the sending side, it's a little confusing, since the server is basically idle, no cpu or disk activity, and yet strace'ing the sending process attached to the other end of the socket shows time 
being spent waiting on a poll which, while it includes the socket being sent to, also includes a bunch of pipes. I've attached a chunk of the output below in the hope that someone on this list could 
offer an explanation as to what might cause the WAL to be trickling over this port rather than catching up as fast as it can?


pg_stat_replication is what you should use for monitoring. IIRC you won't see 
some columns in barman's row.
Also you could query pg_replication_slots as :

select pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn), pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) from pg_replication_slots where active_pid = <pid of the wal sender, taken from pg_stat_replication>;

 pg_wal_lsn_diff | pg_wal_lsn_diff
-+-
 312 | 1917736
(1 row)

In the first column you should see diff values (in bytes) close to zero. In the second you should see the length (window) of your replication slot in bytes, which, converted to 16MB segments, 
should indicate the excess number of WALs due to the replication slot, in addition to the other factors (archiving, checkpoints). In my system it never reaches the size of a single WAL.




cheers,

Chris

strace output for the streaming_barman slot servicing the barman replication 
slot:


IMO no need to go deep into gdb and stack traces unless you are sure there is a 
problem further down.



--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=3577, si_uid=26} ---
write(4, "\0", 1)   = 1
rt_sigreturn({mask=[]}) = -1 EINTR (Interrupted system call)
read(3, "\0", 16)   = 1
read(6, 0x7fffdcd7, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
recvfrom(10, 0x7fffdcaf, 1, 0, NULL, NULL) = -1 EAGAIN (Resource 
temporarily unavailable)
read(5, 
"\3\1\0\0\7\326\355\2\343\0\0\0@\n\0\0\0\17\1\230\313\4\0\0)\347\372l\0\0\0\0"...,
 4272) = 4272
sendto(10, "d\0\0\20\315w\0\0\4\313\230\1\17P\0\0\4\313\230\1 
\0\0\2\37\321\343\345\306\20\3\1"..., 4302, 0, NULL, 0) = 4302
read(3, 0x7fffdc90, 16) = -1 EAGAIN (Resource temporarily 
unavailable)
poll([{fd=10, events=POLLIN}, {fd=3, events=POLLIN}, {fd=6, events=POLLIN}], 3, 
26076) = ? ERESTART_RESTARTBLOCK (Interrupted by signal)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=3577, si_uid=26} ---
write(4, "\0", 1)   = 1
rt_sigreturn({mask=[]}) = -1 EINTR (Interrupted system call)
read(3, "\0", 16)   = 1
read(6, 0x7fffdcd7, 1)  = -1 EAGAIN (Resource temporarily 
unavailable)
recvfrom(10, 0x7fffdcaf, 1, 0, NULL, NULL) = -1 EAGAIN (Resource 
temporarily unavailable)
read(5, "~\320\5\0\3\0\0\0\0 
\1\230\313\4\0\0\3\0\0\0\0\0\0\0\323\316\314\0\0\0\0\0"..., 8192) = 8192
sendto(10, "d\0\0 \35w\0\0\4\313\230\1 
\0\0\0\4\313\230\1@\0\0\2\37\321\343\361\376\226~\320"..., 8222, 0, NULL, 0) = 8222
read(3, 0x7fffdc90, 16) = -1 EAGAIN (Resource temporarily 
unavailable)
poll([{fd=10, events=POLLIN}, {fd=3, events=POLLIN}, {fd=6, events=POLLIN}], 3, 
25275) = ? ERESTART_RESTARTBLOCK (Interrupted by signal)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_USER, si_pid=3577, si_uid=26} ---

Interesting numbers:
- process 3577 is the wal writer process
- fd 10 is the socket, 3 and 6 are pipes
- time is visib

Re: logical replication resiliency

2018-12-30 Thread Achilleas Mantzios

On 23/12/18 3:39 μ.μ., Istvan Soos wrote:

I have a database with about 1 TB of data, mostly inserts, little
updates and deletes, and I want to setup 2 logical replication for it.
What should be my expectation with the replication?

1. What is the time-window that either of the nodes can be offline?
- Will it survive if the node doing the publishing is offline for a
few hours and then goes back online?

No problem with that.

- Will it survive if the subscriber node is offline for a few hours
and then goes back online?
There is a problem with that. Due to the replication slot, the publisher will start keeping WALs, waiting for the subscriber to wake up. It depends on the amount of WAL produced, peak time or not, and 
any other special situations (business) that could influence the amount of writes to the DB. You can easily find out by looking at your archive. Note that once the space dedicated for WALs is filled 
up, your publisher node will PANIC.
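
For example, a quick way to see how much WAL each slot is holding back (a sketch, using pgsql 10+ function names):

SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;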


2. If the replication is aborted for whatever reason, is there a way
to do it incrementally, or will it re-sync the 1TB at every such
event?

It is done incrementally, provided there is WAL continuity.


3. What alternative tools are out there to e.g. periodically sync from
one db to another?
PgSQL replication solutions in various forms, sizes and shapes have been around since 2001, so there are plenty of tools out there, plus the custom scripts that you can always write. But since logical replication is in 
the core (starting from pgsql 10.0) I see no reason not to use it.


You can have a look at an article I wrote here : 
https://severalnines.com/blog/using-postgresql-logical-replication-maintain-always-date-readwrite-test-server



Thanks,
   Istvan




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Thoughts on row-level security for webapps?

2019-01-02 Thread Achilleas Mantzios

On 31/12/18 6:57 μ.μ., Siegfried Bilstein wrote:

Hi all,

I'm evaluating using a tool called Postgraphile that generates a GraphSQL server from a postgres setup. The recommended way of handling security is to implement RLS within postgres and simply have 
the webserver take a cookie or similar and define which user is querying data.


I've normally built webapps like this: pull out user id from a session cookie -> 
the API endpoint verifies the user and whether or not it has access to the given data 
-> app code mutates the data.

With Postgraphile the request specifies the mutation and the server processes 
the request and relies on Postgres to determine if the user has correct access 
rights.

It seems like I would need to create a ROLE for every single member that signs 
up for my website, which I'm a little concerned about.


Why?


Is this a common usage pattern for SQL security? Any gotchas relying on RLS?

--
Siggy Bilstein
CTO ofAyuda Care <https://www.ayudacare.com>
Book some time <https://calendly.com/siggy-cto> with me!



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Use bytearray for blobs or not?

2019-01-04 Thread Achilleas Mantzios

On 4/1/19 1:41 μ.μ., Thomas Güttler wrote:

Some months ago I wrote a little application with Python+Django which stores
blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than a
concrete PG issue.

What do you think?


Performance (at least for JDBC) is known to be better with blobs.
However, with bytea life is just easier for many reasons (backups, logical 
replication, other types of replication, sys management, etc).



Which alternatives could be useful?

Regards,
  Thomas







--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Use bytearray for blobs or not?

2019-01-06 Thread Achilleas Mantzios

On 4/1/19 6:02 μ.μ., Rob Sargent wrote:


On 1/4/19 4:48 AM, Achilleas Mantzios wrote:

On 4/1/19 1:41 μ.μ., Thomas Güttler wrote:

Some months ago I wrote a little application with Python+Django which stores
blob data in bytearrays.

It works.

In the future there will be a lot more traffic, and I am unsure
if this is really a good solution. I know this is more FUD than
concrete PG issue.

What do you think?


Performance (at least for JDBC) is known to be better with blobs.


Do you have any details on which part is slower with bytea? Original insert, read back to client, general (p)sql access.  I'm moving towards bytea but still have time to change my mind if I cannot 
afford the performance hit.

No, since I never used them (blobs) in production. Basically the Java driver 
does not support streaming for bytea, so for very large data (e.g. videos, movies, etc.) 
people use blobs.



However, with bytea life is just easier for many reasons (backups, logical 
replication, other types of replication, sys management, etc).

Yes, and in my case I even get a cheap compression from the original file.




Which alternatives could be useful?

Regards,
  Thomas












--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: the installation of pgadmin4 makes me weep in frustration

2019-01-14 Thread Achilleas Mantzios

On 14/1/19 9:58 π.μ., robert wrote:


Hi There


first, thanks a lot for the great and beautiful software we get from PostgreSQL 
and all people around it.

But I wonder how it comes that installing pgadmin4 is so incredibly hard?

And no documentation.

I would like to install pgadmin4 to my ubuntu 18.1 laptop.

Where do I find documentation on how to install pgadmin4? Preferably I would like to 
install it using pip.


It's no longer a standalone app, but rather a web app; follow the READMEs in 
the source. At least that is what I did, and I agree that it was not 
straightforward.



thanks

robert

--
Robert Rottermann CEO

031 333 10 20
rob...@redo2oo.ch <mailto:rob...@redo2oo.ch>
Sickingerstrasse 3, 3014 Bern <http://maps.apple.com/?q=Sickingerstrasse 3,3014 
Bern>

https://Redo2oo.ch <https://redo2oo.ch>

*Ihr Partner wenn es um ERP Lösungen geht.*




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Oracle BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Achilleas Mantzios

Good morning

On 15/1/19 10:39 π.μ., Mihalidesová Jana wrote:


Hi,

We are trying to migrate from Oracle to Postgres using ora2pg, but we hit some weird 
behavior of bytea. Or it's just our ignorance.

Table migration was OK, but we are not able to read the bytea data. What did we do 
wrong?



Assuming your data are JPEG images, try something like:
psql -c "copy (select encode(serializable_value,'hex') from alf_node_properties_zaloha 
where serializable_value is not null LIMIT 1) to stdout" | xxd -p -r > foo.jpg

then use some program (gwenview) to see the image.

If you want to dump all images (assuming again we are dealing with images 
here), then loop over each row and do the same as above.

But it is essential that you know what those values represent. Are they pdf 
docs? images? binaries? octet-streams?


Thank you for your help,




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Barman versus pgBackRest

2019-01-17 Thread Achilleas Mantzios

Hello,

One strong point of barman IMHO is transparently converting an incremental backup to a full backup for retention purposes, so retention specification is far more liberal than with pgbackrest, and 
configuring for incremental backup does not pose any limitations to the schedule of backups.
In our environment our net connection to the remote site (repo) is extremely lousy (although within Switzerland, if that makes any difference), so with pgbackrest a full backup of our 1.3TB db would 
take about 2 days. Let's say it is set in cron weekly on Sunday (Sunday 1:00->Tuesday); then I would have to take incr backups from Wednesday->Saturday, and we would also have to take a full backup next 
Sunday.

With pgbarman we had to set :
reuse_backup = link
retention_policy = RECOVERY WINDOW OF 14 DAYS
and just perform regular backups. So for barman every backup can be used as a 
base for the next backup, which achieves fast backups and reduced disk space.
In pgbackrest one has to be explicit about the retention of both full backups 
and diff backups.
It would be nice in pgbackrest to only have incremental backups and let the 
system do the necessary management (converting incremental to full) 
transparently and asynchronously, e.g. via cron.
I have read about the --repo-hardlink option.
"
This gives the appearance that each backup is a full backup at the file-system 
level
"
So could we just take a first full backup and then switch permanently to incr 
backups?
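
For comparison, the explicit pgBackRest retention settings mentioned above look like this (a sketch; option names per the pgBackRest configuration reference, so verify against your version):

[global]
repo1-retention-full=2
repo1-retention-diff=6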

--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: Barman versus pgBackRest

2019-01-17 Thread Achilleas Mantzios

On 17/1/19 4:12 μ.μ., Jehan-Guillaume (ioguix) de Rorthais wrote:

On Thu, 17 Jan 2019 13:09:18 +0200
Achilleas Mantzios  wrote:


Hello,

One strong point of barman IMHO is transparently converting an incremental
backup to a full backup for retention purposes, so retention specification is
far more liberal than with pgbackrest, and configuring for incremental backup
does not pose any limitations to the schedule of backups. In our environment
our net connection to the remote site (repo) is extremely lousy, (although
within Switzerland if that makes any difference), so with pgbackrest a full
backup of our 1.3TB db, would take about 2 days ,lets say set in cron weekly
on sunday, (Sunday 1:00->Tuesday),  then I would have to take incr backups
from Wednesday->Saturday. And we would have to also take a full backup next
Sunday. With pgbarman we had to set : reuse_backup = link retention_policy =
RECOVERY WINDOW OF 14 DAYS and just perform regular backups. So for barman
every backup can be used as a base for the next backup, which achieves fast
backups and reduced disk space. In pgbackrest one has to be explicit about
the retention of both full backups and diff backups. it would be nice in
pgbackrest to only have incremental backups and let the system do the
necessary management (converting incremental to full) transparently and
asynchronously, e.g.via cron. I have read about the --repo-hardlink option. "
This gives the appearance that each backup is a full backup at the
file-system level " So could we just take a first full backup and then switch
permanently to incr backups?

Funnily enough, I opened an issue a month ago about this feature. If you want
to join the brainstorming, discuss, add some opinions and thoughts, see
https://github.com/pgbackrest/pgbackrest/issues/644

Surely I enjoyed your convo there!





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: Upgrading from 9.2.15 to 9.4.20

2019-01-18 Thread Achilleas Mantzios

On 18/1/19 12:30 μ.μ., Michal wrote:

I'm trying to upgrade PostgreSQL from 9.2.15 to 9.4.20
with this:
  /usr/pgsql-9.4/bin/pg_upgrade --old-bindir=/usr/bin
--new-bindir=/usr/pgsql-9.4/bin --old-datadir=/var/lib/pgsql/data
--new-datadir=/var/lib/pgsql/9.4/data

Can anybody help me with this? I also tried the -p port parameter; it did not help. Both
database instances are shut down.

That gives me this output:

Performing Consistency Checks
-
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
 Is the server running locally and accepting
 connections on Unix domain socket "/run/postgresql/.s.PGSQL.50432"?


could not connect to old postmaster started with the command:
"/usr/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/data" -o
"-p 50432 -b  -c listen_addresses='' -c unix_so  cket_permissions=0700 -c
unix_socket_directory='/run/postgresql'" start
Failure, exiting

Tweak your .pgpass a little bit; I remember that from 9.3 to 10 I had to define:
/var/lib/pgsql:50432:*:postgres:

or just make local UNIX connections trusted in pg_hba.conf
Also try to start the old postmaster by hand using the command shown, and 
verify that you can locate the socket under /run/postgresql/.s.PGSQL.50432.

Make sure that the directory is there as well.






--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: PostgreSQL logical replication depends on WAL segments?

2019-01-22 Thread Achilleas Mantzios

On 22/1/19 3:18 μ.μ., Josef Machytka wrote:

Hello, I already tried to ask on stackoverflow but so far without success.
(https://stackoverflow.com/questions/54292816/postgresql-logical-replication-depends-on-wal-segments)

Could someone help me please?



I am successfully using logical replication between 2 PG 11 cloud VMs for latest data. But I tried to publish also some older tables to transfer data between databases and got strange error about 
missing WAL segment.


These older partitions contain data 5-6 days old. I successfully published them on master and refreshed subscription on logical replica. But now I am getting these strange error messages on logical 
replica:


|2019-01-21 15:03:14.713 UTC [17203] LOG: logical replication table synchronization worker for subscription "mysubscription", table "mytable_20190115" has finished 2019-01-21 15:03:19.768 UTC 
[18877] LOG: logical replication apply worker for subscription "mysubscription" has started 2019-01-21 15:03:19.797 UTC [18877] ERROR: could not receive data from WAL stream: ERROR: requested WAL 
segment 0001098E00CB has already been removed 2019-01-21 15:03:19.799 UTC [29534] LOG: background worker "logical replication worker" (PID 18877) exited with exit code 1 2019-01-21 
15:03:24.806 UTC [18910] LOG: logical replication apply worker for subscription "mysubscription" has started 2019-01-21 15:03:24.824 UTC [18911] LOG: logical replication table synchronization worker 
for subscription "mysubscription", table "mytable_20190116" has started 2019-01-21 15:03:24.831 UTC [18910] ERROR: could not receive data from WAL stream: ERROR: requested WAL segment 
0001098E00CB has already been removed 2019-01-21 15:03:24.834 UTC [29534] LOG: background worker "logical replication worker" (PID 18910) exited with exit code 1 |


If you have WAL archiving enabled, then try to find the missing WALs and copy 
them into your data/pg_wal directory.
Normally the replication slot will retain all WAL that has not yet been applied 
to the subscriber, so what you describe is not normal. Do you have any cron 
task that touches files in this dir?
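
For example (a sketch; /wal_archive is a hypothetical archive location, and the segment name is taken from the error above):

cp /wal_archive/0001098E00CB "$PGDATA/pg_wal/"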


Which is confusing for me. I tried to find some info but did not find anything 
about logical replication depending on WAL segments.

There is no streaming replication running on that particular master and these 
error messages I see on both master and replica connected with only logical 
replication.

Am I doing something wrong? Is there some special way how to publish older 
data? For newer data and latest data all works without problems.

Of course since I published like ~20 tables it took some time for replica to process all tables - currently it processes always 2 at the time. But I still do not understand why it should depend on 
WAL segments... Thank you very much.


I tried to unpublished and unsubscribe these older tables and publish and 
subscribe them again but getting still the same error message for the exactly 
the same WAL segment number.

**I unpublished and unsubscribed those problematic tables and error messages 
stopped so they were definitely related to logical replication. Could they be 
caused by snapshot?

I even made additional strange experience with WAL segments errors - my logical replica had only quite small disk and during all that fiddling I forgot to check disk usage. So postgresql on logical 
replica crashed due to full disk. Since I use GCE I just resized root disk and after restart of the instance got more space. But I also got back missing WAL segments errors in connections with 
logical replication. My postgresql log on replica is now full of sequence of these 3 lines:


|2019-01-22 09:47:14.408 UTC [1946] LOG: logical replication apply worker for subscription "mysubscription" has started 2019-01-22 09:47:14.429 UTC [1946] ERROR: could not receive data from WAL 
stream: ERROR: requested WAL segment 0001099D007A has already been removed 2019-01-22 09:47:14.431 UTC [737] LOG: background worker "logical replication worker" (PID 1946) exited with 
exit code 1 |


Why logical replication depends on some old WAL segments? Today's data seem to work perfectly although there cannot be all WAL segments for today available on the logical master. But I am unable to 
publish older data...


Thanks for help.

Josef Machytka




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: PostgreSQL logical replication depends on WAL segments?

2019-01-22 Thread Achilleas Mantzios

On 22/1/19 6:01 μ.μ., Josef Machytka wrote:


https://www.postgresql.org/docs/11/logical-replication-architecture.html


Thanks, I see... So if I understand it correctly - since I have quite big partitions like ~30 GB each in one parent table and from ~1GB to ~5 GB in several others I presume I had to set 
wal_keep_segments to some really high number and stop our security cronjob cleaning old WAL segments (because we already had some problems with almost full disk due to old WAL segments)


I asked you if you had any cron jobs messing around with your WALs. The WALs in the pg_wal dir are PostgreSQL's business to manage, not your scripts'. Don't do that. Setting wal_keep_segments to a very 
large value or using replication slots (as with logical repl) won't help if your crons are deleting the WALs.

Now what you have to do :
- read up about the basic concepts of pgsql administration : 
https://www.postgresql.org/docs/11/admin.html
- understand how postgresql replication slots work (logical repl is based on 
those)
- understand how WAL archiving works
- understand how streaming replication works
- understand how logical replication works
- understand how the various runtime configuration params act on the number of 
WALs kept

until the whole transfer of snapshot is done. Because only after the whole snapshot is transferred logical replication workers start to transfer WAL logs reflecting changes done from the moment 
snapshot was taken...


jm



--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



Re: PostgreSQL logical replication depends on WAL segments?

2019-01-22 Thread Achilleas Mantzios

On 22/1/19 8:59 μ.μ., Jeremy Finzel wrote:

On Tue, Jan 22, 2019 at 12:52 PM Andres Freund > wrote:


Hi,

On 2019-01-22 11:10:27 -0600, Jeremy Finzel wrote:
> P.S. do heed the advice of the others and get more familiar with
the docs
> around WAL archiving.

Logical replication doesn't normally interact with WAL archiving
in any
way, so that seems orthogonal.


But he is manually removing WAL files via cron which should normally 
be held until the replication slot doesn't need it anymore.  I do 
believe that is why he is getting errors.  The suggestion is to rely 
on Postgres' WAL archiving rather than removing WAL files manually...


WAL retention in pg_wal and WAL archiving are different things. The OP 
got into problems because he deliberately went on to delete files in pg_wal, 
which means he used the wrong method to address a wrong situation.


However, if the OP still has those WAL files archived, he can use them to 
bring logical replication back in sync. Don't ask me how I know: 
https://www.postgresql.org/message-id/ae8812c3-d138-73b7-537a-a273e15ef6e1%40matrix.gatewaynet.com




Thanks,
Jeremy


Re: PostgreSQL logical replication depends on WAL segments?

2019-01-22 Thread Achilleas Mantzios


On 22/1/19 10:18 μ.μ., Jeremy Finzel wrote:
On Tue, Jan 22, 2019 at 2:16 PM Jeremy Finzel > wrote:


any one of those WAL files will not be archived and
recycled as long as it no longer needed by a replication slot


:(.  s/no longer/still.  I meant to say any one of those WAL files 
will not be archived and recycled as long as it still needed by a 
replication slot.



A WAL segment will be archived as soon as it is completed. 
https://www.postgresql.org/docs/11/continuous-archiving.html



Re: PostgreSQL logical replication depends on WAL segments?

2019-01-22 Thread Achilleas Mantzios

On 23/1/19 5:26 π.μ., Jeremy Finzel wrote:

On Tue, Jan 22, 2019 at 8:19 PM Achilleas Mantzios mailto:ach...@matrix.gatewaynet.com>> wrote:


On 22/1/19 10:18 μ.μ., Jeremy Finzel wrote:

On Tue, Jan 22, 2019 at 2:16 PM Jeremy Finzel mailto:finz...@gmail.com>> wrote:

any one of those WAL files will not be archived and recycled as 
long as it no longer needed by a replication slot


:(.  s/no longer/still.  I meant to say any one of those WAL files will not 
be archived and recycled as long as it still needed by a replication slot.



A WAL segment will be archived as soon as it is completed. 
https://www.postgresql.org/docs/11/continuous-archiving.html


Yep, you’re right. Let me try this again. It will be archived, but the segment 
won’t be reused or removed as long as the slot needs it. Is that an accurate 
statement?

That's better :) Also it depends on the previous checkpoint.



Thanks,
Jeremy





--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt


