How to migrate database from 10.1 to 9.1

2018-05-13 Thread Andrus

Database is created in

   "PostgreSQL 10.1 on x86_64-pc-mingw64, compiled by gcc.exe (Rev5, Built 
by MSYS2 project) 4.9.2, 64-bit"


and contains tables with autogenerated primary keys like

   CREATE TABLE public.logifail
   (
     id integer NOT NULL DEFAULT nextval('logifail_id_seq'::regclass),
     ...
     CONSTRAINT logifail_pkey PRIMARY KEY (id),
     CONSTRAINT logifail_id_check CHECK (id > 0)
   )
   WITH (
     OIDS=FALSE
   );

If this database is restored to Postgres 9.1 from a custom-format backup using


   pg_restore --role=$1_owner --no-owner --dbname=$1 --jobs=4 --verbose 
--username=postgres

automatic primary key generation stops working.

The id column is restored without its sequence. The table becomes:

   CREATE TABLE logifail
   (
     id integer NOT NULL,
     ...
     CONSTRAINT logifail_pkey PRIMARY KEY (id),
     CONSTRAINT logifail_id_check CHECK (id > 0)
   )
   WITH (
     OIDS=FALSE
   );

How to fix this so that the sequences are also restored?
I tried restoring with both the 10.1 and the 9.1 pg_restore, but the
autogenerated primary key columns are still lost.
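For what it's worth, one hedged manual workaround (a sketch only, assuming the failure is that 9.1 rejects the 10-style sequence definitions in the dump) is to recreate each lost sequence and its DEFAULT on the 9.1 side after the restore:

```sql
-- Sketch: recreate the lost sequence and reattach it to logifail.id.
CREATE SEQUENCE logifail_id_seq OWNED BY logifail.id;

ALTER TABLE logifail
    ALTER COLUMN id SET DEFAULT nextval('logifail_id_seq');

-- Resume numbering above the highest restored id.
SELECT setval('logifail_id_seq', COALESCE(max(id), 1)) FROM logifail;
```

This would have to be repeated for every table with a serial-style key.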


Posted also in

https://stackoverflow.com/questions/50317935/how-to-ove-database-from-postgres-10-1-to-9-1

Andrus. 





How to create StartsWith function for char(n) type with ending space

2018-07-06 Thread Andrus

Hi!

I need to create a startswith function which returns true if a char(n) database
column starts with a given string, which may contain spaces at the end.
Spaces should be treated like any other character.

Using sample data below

startswith( test, 'A')
startswith( test, 'A  ')
StartsWith(test, rpad('A',19) )

should return true

but

startswith( test, RPAD( 'A', 20))  should return false, since there is an extra
space at the end of the check string.

The database contains a test column of type char(20), and this cannot be
changed.

I tried the code below but it returns false.

How can I fix this so that it returns true?
Using Postgres starting from 9.1.

Andrus.

CREATE or replace FUNCTION public.likeescape( str text )
--
https://stackoverflow.com/questions/10153440/how-to-escape-string-while-matching-pattern-in-postgresql
RETURNS text AS $$
SELECT replace(replace(replace($1,'^','^^'),'%','^%'),'_','^_') ;
$$ LANGUAGE sql IMMUTABLE;

CREATE or replace FUNCTION public.StartWith( cstr text, algusosa text )
RETURNS bool AS $$
SELECT $2 is null or $1 like likeescape($2) ||'%' ESCAPE '^' ;
$$ LANGUAGE sql IMMUTABLE;

create temp table test ( test char(20) ) on commit drop;
insert into test values ('A' );

select StartWith(test, 'A ' ) from test

posted also in

https://stackoverflow.com/questions/51206529/how-to-create-startswith-function-for-charn-type-with-ending-space 





How to transfer databases from one server to another

2020-01-26 Thread Andrus

Hi!

The VPS server runs old Debian 6 Squeeze with Postgres 9.1.
It has 24 databases.

Every night, backup copies are created using pg_dump into the /root/backups
directory for every database.

This directory has 24 .backup files with a total size of 37 GB.

I installed new VPS server with Debian 10 and Postgres 12.

How to transfer those databases to the new server?

Both servers have ssh and a root user, Postgres port 5432 open, a 100 Mb
internet connection and fixed IP addresses. At night they are not used by
users and can be stopped during the move.


Should I download the .backup files and use pg_restore, or use a pipe to
restore the whole cluster?


Andrus. 






Re: How to transfer databases from one server to another

2020-01-27 Thread Andrus

Hi!

Before you do any of this I would check the Release Notes for the first 
release of each major release. Prior to version 10 that would be X.X.x 
where X is a major release. For 10+ that is X.x.  I would also test the 
upgrade before doing it on your production setup.


I want to do a test transfer first, check that the applications work, and after
that do the final transfer.


Best practice if you are going the dump/restore route is to use the pg_dump 
binary from the new server(12) to dump the old server(9.1)


Postgres version 12 pg_dump probably cannot be installed on the old server
(Debian Squeeze 9).
Running pg_dump from the new server would probably take much more time, since
the data is read in uncompressed form and dumping is a time-consuming process.
(The internet connection between those servers is fast; SSH copy speed was 800
Mbit, not 100 Mbit as I wrote earlier; it took 5 minutes to copy 37 GB.)


There are also some hundreds of Postgres login and group roles on the old
server, which are also used in access rights within the databases.

Those need to be transferred as well.

My plan is:

1. Use pg_dump 9.1 on the old server to create 24 .backup files in custom
format.
2. Use pgAdmin's "backup globals" command to dump the role definitions on the
old server to a text file.
3. Manually edit the role definitions to delete the role postgres, since it
already exists on the new server.
4. Run the edited role definitions script using pgAdmin on the new server to
create the roles.

5. Use Midnight Commander to copy the 24 .backup files from the old server to
the new one.
6. Use Postgres 12 pg_restore with a job count of 4 to restore those 24
databases to the new server sequentially.


To repeat the transfer after testing:

1. Delete the restored databases.
2. Delete the imported roles on the new server.
3. Run steps 1-6 of the plan again.


Questions:

1. pgAdmin allows deleting roles only one by one.
Deleting hundreds of roles is a huge amount of work.
How can I invoke a command like

DELETE ALL ROLES EXCEPT postgres

?
Is there some command, script or pgAdmin GUI for this?
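One hedged approach (a sketch, not a pgAdmin feature) is to generate the DROP ROLE statements from a role-name list, review them, and only then feed them to psql. The role names below are made up for illustration; on a real server the list would come from `psql -Atc "SELECT rolname FROM pg_roles"`:

```shell
# Turn a role list into DROP ROLE statements, keeping postgres.
# The printf mocks the output of:
#   psql -Atc "SELECT rolname FROM pg_roles"
printf 'postgres\napp_owner\nreport_ro\n' |
  grep -vx 'postgres' |
  sed 's/.*/DROP ROLE "&";/'
```

Piping the result into psql on the new server would drop everything except postgres; dropping the restored databases first avoids dependency errors.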

2. Is it OK to restore from the 9.1 backups, or should I create backups using
pg_dump from Postgres 12?

I have done some minor testing and haven't found issues.

3. How can I create a shell script which restores all files from the
/root/backups directory of the old server?

(I'm new to Linux; this is not a PostgreSQL-related question.)
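A minimal sketch of such a loop is below. The echo stands in for the real pg_restore invocation so the loop itself can be shown safely; the directory and options match the plan above but are otherwise assumptions:

```shell
# restore_all DIR: emit one pg_restore command per .backup file in DIR.
# Replace the echo with the real call, e.g.
#   pg_restore --clean --create --if-exists --dbname=postgres --jobs=4 "$f"
restore_all() {
    for f in "$1"/*.backup; do
        [ -e "$f" ] || continue   # glob matched nothing
        echo pg_restore --clean --create --if-exists --dbname=postgres --jobs=4 "$f"
    done
}

restore_all /root/backups
```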

4. Are there settings which can be used to speed up the restore process? Will
turning fsync off during the restore speed it up?
The new server has 11 GB of RAM. No other applications are running during the
database transfer.

shared_buffers=1GB is currently set in postgresql.conf.

5. Can this plan be improved?

Andrus.







Re: How to transfer databases from one server to another

2020-01-27 Thread Andrus

Hi!


3. Manually edit role definitions to delete role postgres since it exists
in new server.

No need, it will throw a harmless error message and continue on.


To my knowledge, pgAdmin executes the script in a single transaction and rolls
it back on error.
Should psql be used, or is there some option in pgAdmin?


To repeat transfer after testing:

1. Delete restored databases.
2. Delete imported roles in new server

That will probably not end well. I'm guessing there are objects that have a
dependency on the roles.


If the imported databases are dropped first, there will hopefully be no
dependencies.

Andrus. 






Re: How to transfer databases from one server to another

2020-01-27 Thread Andrus

Hi!


Postgres version 12 pg_dump probably cannot be installed on the old server
(Debian Squeeze 9).



I have it installed on my Debian 9 laptop, from the PostgreSQL apt repo:



 https://apt.postgresql.org


The old server uses Debian Squeeze, whose version is 6.
I mistakenly typed Debian Squeeze 9; I'm sorry.

Andrus.





How to restore to empty database

2020-01-30 Thread Andrus

Hi!

I want to restore to a new empty database using pg_restore.
pg_restore should create the new empty database and restore into it.
If the database exists, pg_restore should remove all data from it, or drop it
and create a new empty database before the restore.


According to the pg_restore documentation, the switches --clean --create
--if-exists should do this.


I tried

   PG_COLOR=auto
   PGHOST=localhost
   PGPASSWORD=mypass
   PGUSER=postgres
   export PGHOST  PGPASSWORD PG_COLOR PGUSER
   pg_restore --clean --create --if-exists --dbname=mydb  --jobs=4 --verbose 
"mydb.backup"


but got error

pg_restore: connecting to database for restore
pg_restore: error: connection to database "mydb" failed: FATAL:  database 
"mydb" does not exist


I also tried without --dbname=mydb, but then got the error

pg_restore: error: one of -d/--dbname and -f/--file must be specified

How can I restore to a database that does not exist, dropping the existing
database first if it does exist?

Should I invoke

drop database if exists

and

create database

commands before calling pg_restore, or can pg_restore do this itself?

Using Postgres 12 on Debian 10

Andrus. 






Re: How to restore to empty database

2020-01-30 Thread Andrus

Hi!


You need to connect to a database that exists with --dbname, for
instance --dbname=postgres. Postgres will then use that connection to
create the new database, in your case mydb.


Thank you, this seems to work.

There are 24 databases in total; the .backup files total 37 GB, approximately
60% of which comes from bytea columns (PDF documents, images).
I am using a VPS server with 4 cores and 11 GB RAM, used only for Postgres.
What is the fastest way to restore the data from all of them into empty
databases? Should I run all the commands in sequence, like

pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database1.backup"
pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database2.backup"
...
pg_restore --clean --create --if-exists --verbose --dbname=postgres  --jobs=4
"database24.backup"

or run them all in parallel without --jobs=4, like

pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database1.backup" &
pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database2.backup" &
...
pg_restore --clean --create --if-exists --verbose --dbname=postgres
"database24.backup" &


or some balance between the two?
Is there some Postgres or Debian setting which can be used during the restore
to speed it up?
I use shared_buffers=1GB; other settings are from the Debian installation.
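A middle ground can be sketched with xargs -P (assuming GNU xargs), running a couple of restores at a time, each with a reduced --jobs; as elsewhere in this sketch, echo stands in for the real pg_restore call:

```shell
# restore_parallel DIR: run up to 2 restores at once, each with --jobs=2.
# Replace echo with the real pg_restore to actually run it.
restore_parallel() {
    ls "$1"/*.backup 2>/dev/null |
      xargs -r -P 2 -I{} echo pg_restore --clean --create --if-exists --dbname=postgres --jobs=2 {}
}

restore_parallel /root/backups
```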

Andrus. 






Re: How to restore to empty database

2020-01-30 Thread Andrus

Hi!

These days 37 GB is relatively small, so you may be getting into the realm
of premature optimization. Do the 24 databases represent an entire cluster
you are trying to transfer?


Basically yes.
The cluster also contains a small test database which does not actually need
to be transferred, but I can delete it manually after the transfer.
Also, postgres, template0 and template1 are not used directly by applications
and probably do not need to be transferred.



If so have you looked at pg_dumpall?:
https://www.postgresql.org/docs/12/app-pg-dumpall.html
It is a text based backup, but it will include all the databases and the 
globals.


The source cluster is on old Debian 6 Squeeze running Postgres 9.1.
Should I create a pipe using pg_dumpall and restore everything from the old
server through the pipe, instead of pg_dump/pg_restore?
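Such a pipe might look like the sketch below (the hostname is a placeholder; pg_dumpall would be the version 12 binary on the new server, reaching back to the 9.1 cluster over the network). The command is only echoed here, not executed:

```shell
# Whole-cluster transfer through a pipe, no intermediate files.
OLD_HOST=old.example.com   # hypothetical address of the Debian 6 server
CMD="pg_dumpall -h $OLD_HOST -U postgres | psql -U postgres -d postgres"
echo "$CMD"
```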


Andrus.





How to restore roles without changing postgres password

2020-02-11 Thread Andrus

Hi!

How can I create a backup script which restores all roles and role memberships
from another server without changing the postgres user's password?


I tried shell script

PGHOST=example.com
PGUSER=postgres
PGPASSWORD=mypass
export PGHOST  PGPASSWORD  PGUSER
pg_dumpall --roles-only --file=globals.sql
psql -f globals.sql postgres

but this also changes the postgres user's password.
How can I restore the roles so that the postgres user's password is not
changed on restore?


The script runs on Debian 10 with Postgres 12.
The server from which it reads the users runs on Debian Squeeze with Postgres 9.1.

Andrus 






Re: How to restore roles without changing postgres password

2020-02-11 Thread Andrus
Hi!

Thank you.

>pg_dumpall creates an SQL file which is just a simple text file

>you can then edit the sql, removing the postgres user from the file
>This can be automated in a script that searches the generated sql file for the 
>postgres user, replacing it with a blank/empty line, or adds -- to the beginning 
>of the line, which comments it out.

This script creates a cluster copy every night, so this should be done
automatically.
I have little experience with Linux.
Can you provide an example of how it should be done using sed or another tool?
There is also a second user named dbadmin whose password also cannot be changed.

It would be best if the CREATE ROLE and ALTER ROLE clauses for the postgres and
dbadmin users were removed from the file.

Or, if this is not reasonable, the same passwords or different role names can be
used in both clusters.
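A hedged sed sketch of that filtering is below. The input is a made-up fragment standing in for globals.sql; the role names postgres and dbadmin are from the question. It deletes the CREATE ROLE and ALTER ROLE lines for those two roles and passes everything else through:

```shell
# filter_roles: strip CREATE/ALTER ROLE lines for postgres and dbadmin.
filter_roles() {
    sed -E '/^(CREATE|ALTER) ROLE (postgres|dbadmin)[ ;]/d'
}

# Sample input standing in for the real globals.sql:
filter_roles <<'EOF'
CREATE ROLE postgres;
ALTER ROLE postgres WITH SUPERUSER PASSWORD 'md5old';
CREATE ROLE dbadmin;
ALTER ROLE dbadmin WITH LOGIN PASSWORD 'md5old';
CREATE ROLE app_user;
ALTER ROLE app_user WITH LOGIN PASSWORD 'md5app';
EOF
```

In the nightly script this would be `filter_roles < globals.sql > globals_filtered.sql`, then `psql -f globals_filtered.sql postgres`. Separately, the version 12 pg_dumpall also has a --no-role-passwords option, which omits passwords for all roles rather than just these two.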

Also, I don't understand why GRANTED BY clauses appear in the file. This looks
like noise.
The GRANT documentation
https://www.postgresql.org/docs/current/sql-grant.html

does not contain a GRANTED BY clause. It looks like pg_dumpall generates an
undocumented clause.

Andrus.


Re: How to restore roles without changing postgres password

2020-02-12 Thread Andrus
Hi!

>Not a bad idea,  would want to extend this to all the roles on the server not 
>just postgres  

>I've  edited the global dump many times  removing/editing table spaces, 
>comment old users, etc..  

Maybe it is easier to create a plpgsql procedure which returns the desired
script as text.
Or one that retrieves the globals from the other cluster using dblink and
applies the changes to the new cluster.

This could be called instead of pg_dumpall and edited for custom needs.
Editing a plpgsql script is easier for Postgres users than creating a sed
script to delete commands from an SQL file.

Andrus.

How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!

Every time a user tries to log on from the same virtual computer where
PostgreSQL resides, Postgres terminates with a 0xC0000005 exception. The log
is below.


About 65 users are accessing this server over VPN. They can use the server
normally.

Logging in from localhost using pgAdmin 4 with the user postgres also works.

Only attempts to log on from localhost with the user dbadmin from a psqlODBC
client cause this exception.


It has worked normally for many years, but now it suddenly stopped working for
localhost.


How to fix this?

Server:

PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 64-bit

OS:

Windows server 2008 R2
Version 6.1 Build 7601

Andrus.

Postgres log:

2020-02-20 15:44:51 EET   LOG:  server process (PID 3788) was terminated by 
exception 0xC0000005
2020-02-20 15:44:51 EET   HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2020-02-20 15:44:51 EET   LOG:  terminating any other active server 
processes
2020-02-20 15:44:51 EET andrus mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET andrus mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET andrus mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET kati mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET kati mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET kati mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET julia mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET julia mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET julia mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET reinpuu mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET reinpuu mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET reinpuu mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET kokka mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET kokka mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET kokka mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET myll mydatabase WARNING:  terminating connection 
because of crash of another server process
2020-02-20 15:44:51 EET myll mydatabase DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2020-02-20 15:44:51 EET myll mydatabase HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-20 15:44:51 EET   WARNING:  terminating connection because of crash 
of another server process
2020-02-20 15:44:51 EET   DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another 
server process exited abnormally and possibly corrupted shared memory.
2020-02-20 15:44:51 EET   HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2020-02-20 15:44:51 EET dbadmin mydatabase FATAL:  the database system is in 
recovery mode
2020-02-20 15:44:51 EET   LOG:  all server processes terminated; 
reinitializing
2020-02-20 15:45:01 EET   FATAL:  pre-existing shared memory block is still 
in use
2020-02-20 15:45:01 EET   HINT:  Check if there are any old server processes 
still running, and terminate them. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!


Upgrade to a version of Postgres that is not 5 years past EOL?


Not possible at the moment, because it would require a lot of testing to avoid
breaking existing applications.

Planned for the future.

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-20 Thread Andrus

Hi!


Given this is localhost connection,  start looking at firewall or AV.


The Windows firewall is turned off. It does not affect local connections on
the same computer.

I turned the Windows antivirus off, but the problem persists.

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Realized I should have been clearer. By pre-libpq I meant this:
https://odbc.postgresql.org/docs/release.html
psqlODBC 09.05.0100 Release
Changes:
Use libpq for all communication with the server
Previously, libpq was only used for authentication. Using it for all 
communication lets us remove a lot of duplicated code. libpq is now 
required for building or using psqlODBC.


I upgraded the psqlODBC driver to version 12.1, but the problem persists.
After the server is manually started, the application works.

I added log_statement = 'all'. The log before the crash is:

2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: SELECT 
drop_table('temptulemus')
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: create temp table 
temptulemus as SELECT * FROM andmetp  ;select * from temptulemus limit 0
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: select n.nspname, 
c.relname, a.attname, a.atttypid, t.typname, a.attnum, a.attlen, 
a.atttypmod, a.attnotnull, c.relhasrules, c.relkind, c.oid, 
pg_get_expr(d.adbin, d.adrelid), case t.typtype when 'd' then t.typbasetype 
else 0 end, t.typtypmod, c.relhasoids, '', c.relhassubclass from 
(((pg_catalog.pg_class c inner join pg_catalog.pg_namespace n on n.oid = 
c.relnamespace and c.oid = 28203181) inner join pg_catalog.pg_attribute a on 
(not a.attisdropped) and a.attnum > 0 and a.attrelid = c.oid) inner join 
pg_catalog.pg_type t on t.oid = a.atttypid) left outer join pg_attrdef d on 
a.atthasdef and d.adrelid = a.attrelid and d.adnum = a.attnum order by 
n.nspname, c.relname, attnum
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: select COUNT(*)::int 
from temptulemus
2020-02-21 18:46:40 EET mydbadmin mydb LOG:  statement: ;SELECT * FROM 
temptulemus offset 0 limit 900
2020-02-21 18:46:40 EET   LOG:  server process (PID 6000) was terminated by 
exception 0xC0000005
2020-02-21 18:46:40 EET   HINT:  See C include file "ntstatus.h" for a 
description of the hexadecimal value.
2020-02-21 18:46:40 EET   LOG:  terminating any other active server 
processes


So the command which causes the crash is

SELECT * FROM temptulemus offset 0 limit 900

As shown in the lines above, this is data from the andmetp table. This table
contains a text type column. This column may contain data like 
Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!

Forgot to say: publish the Linux logs, they may have more details about what
is going on.


The Linux server has an older application client which replicates all data
from the andmetp table using select * from andmetp.

I tried several times. The Linux log contains

2020-02-21 16:18:50 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 16:18:50 EET mydbadmin mydb FATAL:  connection to client lost
2020-02-21 16:18:50 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 17:19:08 EET mydbadmin mydb LOG:  could not send data to client: 
Connection reset by peer

2020-02-21 17:19:08 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp
2020-02-21 17:19:08 EET mydbadmin mydb FATAL:  connection to client lost
2020-02-21 17:19:08 EET mydbadmin mydb STATEMENT:  SELECT * FROM andmetp

It looks like only the connection was terminated. "dmesg -T" does not show a
postgres crash. Maybe the postgres main process is not killed on Linux.



is this happening from any client or just a specific client running ODBC?


This happens on the Windows 2008 server when logged into that same server from
an RDP client, and on my Windows 10 development workstation, which logs into
the Linux server over VPN.



are the clients running AV if so are the AV versions the same?


On the Windows 2008 server the antivirus was turned off, as I wrote. On my
development workstation I use the Microsoft antivirus that comes with Windows 10.


Given this is killing a Linux server, it sounds like ODBC is sending back
garbage data to the server, crashing it.


I can try to publish the odbc log if this helps.

The application replicates some tables at startup to the local disk.
For tables with 5000 or more rows the odbc connection hangs if there is heavy
TCP packet loss.

So we changed the replication command

select * from mytable

to commands

select drop_table('temptulemus');
create temp table temptulemus as select * from mytable;

select * from mytable offset 0 limit 900;
select * from mytable offset 900 limit 900;
select * from mytable offset 1800 limit 900;
...

etc.

In this case the data was also retrieved over poor connections.
Maybe there is some better solution for this.

There are several settings in ODBC to change how text columns are processed;
play around with those settings and see if that helps.

https://odbc.postgresql.org/docs/config.html


The application needs certain settings. Maybe it is better to try upgrading to
Postgres 12.2 first.


As you have it narrowed down to a table,  try querying only a few records 
at a time to see if you can identify the specific Record(s) that may be the 
issue.
SELECT * FROM temptulemus where temptulemus.unique_id >1  offset 0 limit 
100
and try querying the columns that do not contain the suspect data that 
could be causing this


The andmetp table contains 584 records on the Linux server. I tried a script
which reads data from this table, each time one more row:


for i=1 to 600
  ? i
  StartTextMerge()
  TEXT TEXTMERGE NOSHOW
select * from andmetp limit <<i>>
  ENDTEXT
  IF !TExec()
    RETURN .f.
  ENDIF
endfor

This worked without error.

Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Yes publish the ODBC logs


I sent the psqlODBC log from the Windows server to you.

I added and removed the Debug=1 parameter from the odbc connection string, and
ran the application as administrator.

Other errors

2020-02-21 21:27:30 EET  ERROR:  invalid memory alloc request size 
4294967293
2020-02-21 21:27:30 EET STATEMENT:  ;SELECT * FROM temptulemus offset 0 
limit 900


and

2020-02-21 21:25:37 EET ERROR:  could not open relation with OID 538386464
2020-02-21 21:25:37 EET STATEMENT:  ;SELECT * FROM temptulemus offset 0 
limit 900


also occurred.


Hello FOXPRO CODE

Yes.
Starting with psqlODBC 09.05.0100, which uses libpq for everything, FoxPro no
longer shows error message details.

I posted it at

https://stackoverflow.com/questions/54978713/how-to-get-psqlodbc-error-details-in-visual-foxpro

Maybe somebody has an idea how to get Postgres error message details using the
new psqlODBC drivers.
I looked into the ODBC documentation and haven't found how error message
details are returned.


Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


In psql what does \d tempestuous show?
What relation is andmetp to tempestuous?


I'm sorry, that was a typo. The correct commands issued from the application are

select drop_table('temptulemus');
create temp table  temptulemus as select * from andmetp;
select * from temptulemus  offset 0 limit 900;

I thought you said you had the problem with the same table on the Linux and
Windows servers?


Yes.


From above what does the drop_table() function do in?:
select drop_table(‘temptulemus’);


drop_table is defined as

CREATE OR REPLACE FUNCTION drop_table(TEXT)
 RETURNS VOID STRICT LANGUAGE plpgsql AS $$
   BEGIN
   EXECUTE 'DROP TABLE ' || $1;
   EXCEPTION WHEN UNDEFINED_TABLE THEN
   RETURN;
   END;
   $$;


Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


Alright so what does:
\d temptulemus


Did not find any relation named "temptulemus".


and/or
\d andmetp


                 Table "public.andmetp"
   Column   |     Type      | Collation | Nullable | Default
------------+---------------+-----------+----------+---------
 andmetp    | character(25) |           | not null |
 klass      | character(1)  |           |          |
 andmeklass | character(10) |           |          |
 otsbaas    | character(8)  |           |          |
 kiirpref   | character(80) |           |          |
 kiirnimi   | character(10) |           |          |
 inlahte    | character(10) |           |          |
 vaateindks | character(10) |           |          |
 tyhjakeeld | ebool         |           |          |
 whenting   | character(50) |           |          |
 kiirindks  | character(10) |           |          |
 validting  | text          |           |          |
 valivaljad | character(10) |           |          |
 valivali2  | character(10) |           |          |
 userlang   | ebool         |           |          |
 valifilter | character(80) |           |          |
 paring     | character(60) |           |          |
 allrows    | ebool         |           |          |
 html       | text          |           |          |
 specialwhe | text          |           |          |
 pakuvalik  | text          |           |          |
 klikkprots | text          |           |          |
 valivali3  | character(10) |           |          |
Indexes:
    "andmetp_pkey" PRIMARY KEY, btree (andmetp)
Referenced by:
    TABLE "desktop" CONSTRAINT "desktop_alamklass_fkey" FOREIGN KEY (alamklass) REFERENCES andmetp(andmetp) ON UPDATE CASCADE DEFERRABLE
Triggers:
    andmetp_trig BEFORE INSERT OR DELETE OR UPDATE ON andmetp FOR EACH STATEMENT EXECUTE PROCEDURE setlastchange()

Andrus.





Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-21 Thread Andrus

Hi!


To me the relevant part of the log is below. Not sure what it means though:
[3604-0.187] execute.c[Exec_with_parameters_resolved]444:
stmt_with_params = ';SELECT * FROM temptulemus offset 0 limit 900'
[3604-0.187] execute.c[Exec_with_parameters_resolved]449: about to begin
SC_execute
[3604-0.187]statement.[SC_execute]2037:   it's NOT a select
statement: stmt=005FE040


Maybe the issue occurs when the ODBC client sends a command to the odbc driver
which contains a semicolon as the first character before SELECT, the returned
data contains certain characters, and the server or client has a certain
configuration.
Maybe I will change the application not to send the semicolon before the
SELECT statement.


Andrus. 






How to get error message details from libpq based psqlODBC driver (regression)

2020-02-22 Thread Andrus

Hi!

I'm looking for a way to fix a psqlODBC driver regression.

Starting with psqlODBC 09.05.0100, when the psqlODBC driver uses libpq for all
operations (earlier versions used libpq only for authentication), the ODBC
client does not show error message details.

For example, users get only a generic error message like

   Connectivity error: ERROR: insert or update on table "mytable" violates
   foreign key constraint  "mytable_myfield_fkey

Error message details like

   Key (myfield)=(mykeyvalue) is not present in table "mymastertable".;

are no longer returned.

How can I get the detailed error message, or at least the bad key value
"mykeyvalue" which is returned in the error message details?

Some ideas:

1. The first versions of the libpq-based ODBC drivers returned the same error
message "no resource found" for all errors. Hiroshi fixed it in later versions.
Maybe the psqlODBC code can be fixed to restore the pre-libpq behaviour.


2. Maybe analyzing ODBC logs from drivers before and after 09.05.0100 may
provide a solution. I can provide ODBC log files.


3. psqlODBC allows passing parameters to libpq from the connection string.
Maybe some libpq parameter can fix this.


4. Maybe some Postgres query, Postgres extension, or direct ODBC or libpq call
can be used to return the last error message details, like the Windows API
GetLastError() or the Unix global errno.


5. Maybe it is possible to create a method which returns the error message
details from the Postgres log file.


Postgres 12.2 and the latest psqlODBC driver, 12.01, are used.
psqlODBC is called from Visual FoxPro.

Andrus. 






Re: How to fix 0xC0000005 exception in Postgres 9.0

2020-02-22 Thread Andrus

Hi!


I have no idea.


I changed the application not to send ; before select.
This solves the issues on both servers.

So using ; as the first character before select in an ODBC command like

;SELECT * from mytable

causes a 0xC0000005 exception in Postgres 9.0, and an ODBC client hangup with
a "connection reset by peer" message in the log file in Postgres 9.6.
The ODBC log analysis in this list was a great help; thanks.


For psqlODBC issues I would suggest asking here:
https://www.postgresql.org/list/pgsql-odbc/
Chances are better that there will be someone there that could answer your
questions.


I posted the error-message-details issue a long time ago on this pgsql-odbc
list but haven't got a solution.

I posted it as separate message here and in

https://stackoverflow.com/questions/60357505/how-to-fix-psqlodbc-driver-regression-to-get-error-message-details

Andrus. 






Re: How to get error message details from libpq based psqlODBC driver (regression)

2020-02-23 Thread Andrus

Hi!


What does the log_error_verbosity setting in postgresql.conf show?


It is not set; postgresql.conf contains its default value from the installation:

#log_error_verbosity = default# terse, default, or verbose messages

I changed it to

log_error_verbosity = verbose

but problem persists.

postgres log file contains

2020-02-23 09:02:27.646 GMT [11252] ERROR:  23503: insert or update on table 
"rid" violates foreign key constraint "rid_yhik_fkey"
2020-02-23 09:02:27.646 GMT [11252] DETAIL:  Key (yhik)=(xx) is not 
present in table "mootyhik".
2020-02-23 09:02:27.646 GMT [11252] LOCATION:  ri_ReportViolation, 
d:\pginstaller_12.auto\postgres.windows-x64\src\backend\utils\adt\ri_triggers.c:2474
2020-02-23 09:02:27.646 GMT [11252] STATEMENT:  insert into rid (dokumnr, 
yhik) values (2065, 'xx')



but application shows only

ERROR: insert or update on table "rid" violates foreign key constraint 
"rid_yhik_fkey"


Andrus. 






Re: How to get error message details from libpq based psqlODBC driver (regression)

2020-02-23 Thread Andrus

Hi!


I don't have an answer for you. There may be someone else on this list that
could help, though I think your best bet would be to ask the question again
on the pgsql-odbc list.


I posted it in pgsql-odbc list.

In pgsql odbc source code file connection.c: line 866

contains:

errprimary = PQresultErrorField(pgres, PG_DIAG_MESSAGE_PRIMARY);

which probably gets only the primary error message.
To get the error message detail, PG_DIAG_MESSAGE_DETAIL should be used, according 
to https://www.postgresql.org/docs/current/libpq-exec.html.

Unfortunately, PG_DIAG_MESSAGE_DETAIL is not used in the psqlodbc source code.

Andrus. 






Postgres 12 backup in 32 bit windows client

2020-03-28 Thread Andrus

Hi!

Postgres 12 is installed in Debian 10 server.
Client computer is running 32 bit windows.

pg_dump.exe, which can be downloaded as part of the Postgres 12 Windows 
installation, is available only as a 64-bit version.


How to make a backup from Postgres 12 on a 32-bit Microsoft Windows computer?
psqlODBC is used for data access.
Maybe there is a 32-bit version of pg_dump.exe.

Andrus. 






Re: Postgres 12 backup in 32 bit windows client

2020-03-28 Thread Andrus

Hi!


What are you going to do with this backup?


If the server disk crashes, it can be used to restore data.

Do you have shell access to the Debian machine in order to do the backup 
there?


I have, but my application user on 32-bit Windows does not.
I want to allow the 32-bit Windows user to make a backup from port 5432 directly 
to her computer's C: drive.


If a 32-bit Postgres 12 pg_dump is not available, maybe it is possible to create a 
stored procedure which invokes pg_dump on the server and returns the .backup file as 
a bytea value to the client:


select run('pg_dump mydatabase')

Andrus.


--
Adrian Klaver
adrian.kla...@aklaver.com 






could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-28 Thread Andrus

Hi!

A Postgres 12 database dump was created on Debian 10 using pg_dump.

Trying to restore it in Windows 10  using

pg_restore --clean --create --if-exists --dbname=postgres --jobs=8 --no-password 
mydb.backup


produces strange message

pg_restore: WARNING:  could not determine encoding for locale "et_EE.UTF-8": 
codeset is "CPUTF-8"


How to fix this ?

The Debian and Windows computers have the same settings:

Latest Postgres 12  is used
OS and database locales are  Estonian
Database encoding is UTF-8

Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-28 Thread Andrus

Hi!

In the Debian Postgres instance in psql what does \l show for the 
databases?


#psql namm postgres
psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.

namm=# \l namm
                              List of databases
 Name |   Owner    | Encoding |   Collate   |    Ctype    |    Access privileges
------+------------+----------+-------------+-------------+---------------------------
 namm | namm_owner | UTF8     | et_EE.UTF-8 | et_EE.UTF-8 | =Tc/namm_owner           +
      |            |          |             |             | namm_owner=CTc/namm_owner
(1 row)


In the Windows 10 command prompt what does systeminfo show?


Host Name: SERVER2
OS Name:   Microsoft Windows 10 Pro
OS Version:10.0.18363 N/A Build 18363
OS Manufacturer:   Microsoft Corporation
OS Configuration:  Standalone Workstation
OS Build Type: Multiprocessor Free
Registered Owner:  Windows User
Registered Organization:
Product ID:00330-70008-16217-AAOEM
Original Install Date: 05.09.2019, 9:16:41
System Boot Time:  28.03.2020, 11:05:23
System Manufacturer:   Gigabyte Technology Co., Ltd.
System Model:  Q270M-D3H
System Type:   x64-based PC
Processor(s):  1 Processor(s) Installed.
  [01]: Intel64 Family 6 Model 158 Stepping 9 
GenuineIntel ~3601 Mhz

BIOS Version:  American Megatrends Inc. F1, 09.01.2017
Windows Directory: C:\WINDOWS
System Directory:  C:\WINDOWS\system32
Boot Device:   \Device\HarddiskVolume3
System Locale: et;Eesti
Input Locale:  et;Eesti
Time Zone: (UTC+02:00) Helsingi, Kiiev, Riia, Sofia, 
Tallinn, Vilnius

Total Physical Memory: 16 286 MB
Available Physical Memory: 12 032 MB
Virtual Memory: Max Size:  18 718 MB
Virtual Memory: Available: 14 867 MB
Virtual Memory: In Use:3 851 MB
Page File Location(s): C:\pagefile.sys
Domain:WORKGROUP
Logon Server:  \\SERVER2
Hotfix(s): 18 Hotfix(s) Installed.
  [01]: KB4534132
  [02]: KB4497165
  [03]: KB4498523
  [04]: KB4503308
  [05]: KB4515383
  [06]: KB4515530
  [07]: KB4516115
  [08]: KB4517245
  [09]: KB4520390
  [10]: KB4521863
  [11]: KB4524244
  [12]: KB4524569
  [13]: KB4528759
  [14]: KB4532441
  [15]: KB4537759
  [16]: KB4538674
  [17]: KB4541338
  [18]: KB4551762
Network Card(s):   2 NIC(s) Installed.
  [01]: TAP-Windows Adapter V9
Connection Name: Ethernet 4
Status:  Media disconnected
  [02]: Intel(R) Ethernet Connection (2) I219-LM
Connection Name: Ethernet 3
DHCP Enabled:Yes
DHCP Server: 192.168.91.1
IP address(es)
[01]: 192.168.91.154
[02]: fe80::94d:b1c:3945:bc8a
[03]: 
2001:7d0:4c83:4c80:257f:b077:e1f7:21e1

[04]: 2001:7d0:4c83:4c80:94d:b1c:3945:bc8a
Hyper-V Requirements:  VM Monitor Mode Extensions: Yes
  Virtualization Enabled In Firmware: Yes
  Second Level Address Translation: Yes
  Data Execution Prevention Available: Yes

Andrus.





Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


Generally speaking, locale names from Unix systems won't work at all on
Windows.  You need to create the database manually with whatever seems
to be the closest Windows locale match, and then restore its contents
without using --create.


This is an unattended script running every night from a .bat file.

How to replace the pg_restore --create option with psql and/or createdb calls
and specify the proper locale for them?

Currently everything has the "Estonian_Estonia.1257" locale in Windows.
Which locale name should be specified in Windows instead of this one?
Or maybe creating a new template with the proper encoding, or changing the template0 
encoding, would help?
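A possible replacement (an untested sketch; the database name, owner and backup 
file name are placeholders to adapt) is to create the database explicitly with the 
Windows locale, then restore without --create:

```
createdb --username=postgres --template=template0 --encoding=UTF8 ^
  --locale="Estonian_Estonia.1257" --owner=mydb_owner mydb
pg_restore --username=postgres --dbname=mydb --clean --if-exists ^
  --jobs=8 --no-password mydb.backup
```

The `^` is the cmd.exe line continuation, so this can go straight into the .bat file.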


Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


System Locale: et;Eesti
Input Locale:  et;Eesti
Hmm, I was expecting to see et_EE though I will admit to not truly 
understanding how Windows does locales.
I should have asked earlier, in the Postgres instance on Windows what does 
\l show for template0?


"D:\Program Files\PostgreSQL\12\bin\psql"  postgres postgres

psql (12.2)
WARNING: Console code page (775) differs from Windows code page (1257)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.

postgres=# \l template0
                                        List of databases
   Name    |  Owner   | Encoding |        Collate        |         Ctype         |   Access privileges
-----------+----------+----------+-----------------------+-----------------------+-----------------------
 template0 | postgres | UTF8     | Estonian_Estonia.1257 | Estonian_Estonia.1257 | =c/postgres          +
           |          |          |                       |                       | postgres=CTc/postgres
(1 row)

Andrus. 






Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

A Postgres 12 server is running on Debian 10 and has a number of databases.

How to mirror changes in those databases to a Windows 10 workstation which 
also runs Postgres 12?


Changes in the server databases on Debian should be sent to a Postgres 12 database on 
Windows over the internet.


If the Debian server goes down, users can change the server address to the Windows 
computer as a temporary workaround and continue working.


Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


There seems to a difference of opinion of what Baltic Code Page to use:
https://en.wikipedia.org/wiki/Code_page_775
https://en.wikipedia.org/wiki/Windows-1257
The post below shows a users method of dealing with this for another CP:
https://www.postgresql.org/message-id/549275CC.4010607%40gmail.com


The console code page 775 message appears since psql is a console application 
running from the command line.


It does not have any relation to the pg_dump/pg_restore issue since the console code 
page is not used in this case.


There is an Estonian locale everywhere.
Maybe this warning is harmless since the Linux code page is ignored and the default 
collation is used.
All table definitions in the restored database contain references to the default 
collation:


CREATE TABLE firma1.acquirpo
(
   kassanr numeric(3,0) NOT NULL,
   policyid character(2) COLLATE pg_catalog."default" NOT NULL,
   trantype character(6) COLLATE pg_catalog."default",
   tacdefault character(10) COLLATE pg_catalog."default",
   tacdenial character(10) COLLATE pg_catalog."default",
   taconline character(10) COLLATE pg_catalog."default",
   floorlimit numeric(12,0),
   randselthr numeric(12,0),
   minrandper numeric(2,0),
   maxrandper numeric(2,0),
   CONSTRAINT acquirpo_pkey PRIMARY KEY (kassanr, policyid)
)

TABLESPACE pg_default;

The same warning appears two times. This command, executed by pg_restore, probably 
causes this (harmless?) warning:


CREATE DATABASE mydb
   WITH
   OWNER = mydb_owner
   ENCODING = 'UTF8'
   LC_COLLATE = 'et_EE.UTF-8'
   LC_CTYPE = 'et_EE.UTF-8'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

If a new database is created manually in the Windows cluster, the following command 
is generated:



CREATE DATABASE mydbmanually
   WITH
   OWNER = mydbmanually_owner
   ENCODING = 'UTF8'
   LC_COLLATE = 'Estonian_Estonia.1257'
   LC_CTYPE = 'Estonian_Estonia.1257'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

Andrus.





Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!


Per my previous post, you might try adding something like:
cmd.exe /c chcp 1257
to the top of the batch file. This idea came from here:
https://stackoverflow.com/questions/20794035/postgresql-warning-console-code-page-437-differs-from-windows-code-page-125
If I am following the error correctly then the issue is that the Postgres 
console programs are using CP755 and that is not something for which there 
is an automatic conversion:

https://www.postgresql.org/docs/12/multibyte.html#id-1.6.10.5.7
There is a conversion for 1257 clients so having your console run as 1257 
should solve the problem. Someone with more current experience on Windows 
will need to comment on whether that is the viable or best solution.


Both servers have UTF-8 encoding.

The non-Unicode code page 775 refers only to command-line applications like 
psql.


The Postgres service, pg_dump and pg_restore do not use console code pages in any 
way; they operate using only the UTF-8 character set since both databases are in 
UTF-8.


I think the console code page warning message is not related to this issue.

Andrus.





Re: Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

Since you are moving between different OSes you will need to use some form 
of logical replication as binary replication will not work.


I can use Hyper-V or something else to run Debian alongside Windows.

This hopefully will also allow bypassing the Windows 20-connection limit so that 
more than 20 users can connect.


Given  that you are Postgres 12 you could use the builtin logical 
replication:

https://www.postgresql.org/docs/12/logical-replication.html


I see these possibilities:

1. Use Hyper-V to run a Debian container and Postgres binary replication
2. Use Hyper-V to run a Debian container and Postgres logical replication
3. Use Windows and Postgres logical replication
4. Use Hyper-V to run a Debian container and a backup utility for whole-disk 
block-level backup



How that needs to managed is going to need more information. As a start:
1) Downtime allowed?


Yes. If the main server stops, I can tell users to enter the backup server address 
instead of the main server's.



2) All databases to be replicated or just some?


I can create a separate cluster so that all databases and users will be 
replicated.
There are 30 databases with a total size of 70 GB. The size increases slowly 
every day as new data is entered.
There are some test and demo databases whose replication is not really 
required, but those can also be replicated if this is simpler.


3) Permissible lag between servers?


Currently backups are created every night and restored on the new server.
Backup of 67 GB of data takes 1 hour, transfer 1 hour, restore and analyze on the 
new server 4 hours. Total 6 hours. So the current lag is 6 to 24 hours.


The goal is to decrease this lag.

4) How are you going to deal with the down server and how do you plan on 
bringing it up again?


The VPS hosting company will bring it up again. I will then manually synchronize 
the two clusters while users continue to enter data; this is not time critical.


Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-29 Thread Andrus

Hi!

Same warning appears two times. This command execute by pg_restore 
probably causes this (harmless?)  warning:

What warning?


pg_restore: WARNING:  could not determine encoding for locale "et_EE.UTF-8":
codeset is "CPUTF-8"

I cranked up a Windows 7 instance and tried to migrate a Postgres 11 
database from Ubuntu and it failed on the CREATE DATABASE step because of 
this line in the dump file:
CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';


I ran this statement in Windows 10 with Postgres 12 successfully. The result 
was:


WARNING:  could not determine encoding for locale "en_US.UTF-8": codeset is 
"CPUTF-8"
WARNING:  could not determine encoding for locale "en_US.UTF-8": codeset is 
"CPUTF-8"

CREATE DATABASE

Query returned successfully in 1 secs 75 msec.

The redmine database was created. I don't understand why it failed in your test.


When I manually changed it in the plain text version of the dump file to:
CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8' 
LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United 
States.1252';


I verified that the data was restored using pg_restore without manually changing 
anything.


Andrus.





Re: Hot standby from Debian to Windows

2020-03-29 Thread Andrus

Hi!

Would it not be easier to just set up another Debian server, run binary 
replication


Breakdowns occur rarely, 0-2 times per year.
I want to try it first.


and put them behind something like pgpool?


The backup server will use a single core and minimal RAM. It may be needed 0-2 
times per year.


pgpool should switch to using it for production work only if the main server does 
not respond.

I haven't found this feature in the pgpool documentation (maybe I missed it).

Andrus. 






Re: Hot standby from Debian to Windows

2020-03-30 Thread Andrus

Hi!

Got it. Just thought it would be easier not to have to deal with cross OS 
issues.

Here is one example:
https://www.pgpool.net/docs/latest/en/html/example-watchdog.html


Hopefully the Windows Hyper-V virtual network adapter will not enforce the Windows 
20-connection limit.
In this case using Debian + Hyper-V + binary replication allows more than 20 users 
to connect and may be the best solution.


Both have a 64-bit OS. Will binary replication work in this case?

Andrus. 






Re: could not determine encoding for locale "et_EE.UTF-8": codeset is "CPUTF-8" in pg_restore

2020-03-30 Thread Andrus

Hi!


Not sure but:
1) I was on Windows 7
2) Using Postgres 11
3) My Windows skills have atrophied, especially with the Windows command 
line.
So was this the same for the database you originally posted about, it 
actually restored it just threw warnings?


Looks like it restored. I haven't checked the restored data.

If so I misunderstood the situation and thought the database was not 
loading.


I tried

CREATE DATABASE redmine
   WITH
   ENCODING = 'UTF8'
   LC_COLLATE = 'foo'
   LC_CTYPE = 'bar' template template0

in Linux and in Windows using Postgres 12.2.
In Linux it throws an error:

ERROR:  invalid locale name: "foo"

In Windows it creates the database and throws only a warning.
Without the template template0 clause it throws an error in Windows also.

In Linux

CREATE DATABASE redmine WITH TEMPLATE = template0 ENCODING = 'UTF8'
LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United
States.1252';


also throws an error:

ERROR:  invalid locale name: "English_United States.1252"

So it looks like pg_dump/pg_restore with --create works only from Linux to 
Windows and does not work from Windows to Linux.


I expect that it should work from Windows to Linux also.

Andrus.





Using compression on TCP transfer

2020-03-31 Thread Andrus

Hi!

Databases contain a lot of char(n) fields containing ASCII characters.
Most fields contain fewer characters than the field width.
Changing them to varchar is not easy.

The database is accessed from Debian Postgres 12.2 over the internet using psqlodbc 
with TLS v1.3.

Mostly SELECT commands are sent and results are retrieved.
Clients have 10-20 Mbit download speeds and 5-20 Mbit upload speeds.

Will data compression increase speed ?
If yes, how to implement this ?

Andrus. 






Re: Using compression on TCP transfer

2020-03-31 Thread Andrus
Hi,

>See the section about sslcompression in 
>https://www.postgresql.org/docs/9.2/libpq-connect.html. It should be your 
>answer.

I added 

sslcompression=1

to the psqlodbc connection string but the log file shows that the connection is still 
uncompressed:

LOG:  connection authorized: user=me database=mydb SSL enabled 
(protocol=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384, bits=256, compression=off)

Maybe this is because, according to the TLSv1.3 standard, compression is no longer 
allowed, or psqlodbc does not pass the option to libpq.

How to compress ?

Andrus.

Re: Using compression on TCP transfer

2020-03-31 Thread Andrus
Hi!
>Updated doc reference:
>https://www.postgresql.org/docs/12/libpq-connect.html 
>They mention that compression is insecure and disabled by default. Taking this 
>into account, compression will require that both ODBC and PostgreSQL are set 
>>up with compression enabled. I could not figure out quickly whether this 
>requires also recompiling the code...

I added 

Pqopt={sslcompression=1};

to the psqlodbc connection string but the log file shows that it is still not 
compressed.

I'm using Debian 10 and Postgres 12 installed from the postgres repository.
How to check whether it supports compression or not?
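One way to check from SQL (a sketch; the pg_stat_ssl view exists in Postgres 12 and 
reports per-connection SSL state, including whether compression is active):

```sql
-- show whether the current backend's SSL connection is compressed
SELECT pid, ssl, version, cipher, compression
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```

If the compression column stays false even with sslcompression=1, the OpenSSL build 
or the TLS version is refusing compression.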

Andrus.

Re: Using compression on TCP transfer

2020-03-31 Thread Andrus

Hi!


It is a simple ALTER TABLE.


The client is a Visual FoxPro application. It sends data sometimes with trailing 
spaces and sometimes without.
With varchar fields, values will appear in the database sometimes with 
trailing spaces and sometimes without.
This requires a major application re-design, which is much more expensive than 
continuing to use char fields.



You'd have to use an OpenSSL library with compression support enabled.


Should I change the OpenSSL installed from the standard repository on the Debian 
server, or can it be changed only for PostgreSQL?

How?


But that will improve speed only if your workload is network bound,
not CPU bound (in which case performance will suffer).


The server has a lot of cores. top shows that CPU usage is small.

Browsers and web servers use compression widely. Apache and IIS enable 
static content compression by default.

Compression should be built into Postgres.

Andrus. 






Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!

After upgrading to Postgres 12   statement

update temprid set
   ContactFirstName =unnest(xpath(

'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))::text,
   yhik =unnest(xpath(
'/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',x))::text
from t

throws error

set-returning functions are not allowed in UPDATE

How to fix this ?


Maybe there is a simple change which makes this statement work?
The result should be any value of the xpath expression in case xpath returns 
multiple values.

In Postgres 9.1 it worked.

Andrus. 






Re: Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-02 Thread Andrus

Hi!


Simply replace
  SET col = unnest(array_value)
with

  SET col = array_value[1]


I tried

update temprid set
   ContactFirstName =xpath(

'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text

But got error

   ERROR:  syntax error at or near "["
   LINE 3: .../BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text

How to fix ?
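One likely cause of the syntax error (a sketch, not a verified fix for this exact 
schema): PostgreSQL only allows subscripting a function result when the call is 
wrapped in parentheses, so the per-column form of the original update would look like:

```sql
update temprid set
   ContactFirstName = (xpath(
'/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x))[1]::text,
   yhik = (xpath(
'/E-Document/Document/DocumentItem/ItemEntry/BaseUnit/text()',x))[1]::text
from t;
```

The `[1]` picks the first value when xpath returns multiple nodes, matching the old 
unnest behaviour for a single assignment.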

Posted also in 


https://stackoverflow.com/questions/60993975/fixing-set-returning-functions-are-not-allowed-in-update

Andrus.




Why there are 30000 rows in sample

2020-04-04 Thread Andrus

Hi!

vacuumdb output:

vacuumdb: vacuuming database "mydb"
INFO:  analyzing "public.mytable"
INFO:  "mytable": scanned 2709 of 2709 pages, containing 10834 live rows and 0 dead rows; 10834 rows in sample, 10834 estimated 
total rows


For tables with more than 30000 rows, it shows that there are 30000 rows in 
the sample.

postgresql.conf does not set  default_statistics_target value.
It contains

#default_statistics_target = 100	# range 1-10000

So I expect that there should be 100 rows in the sample.
Why does Postgres use 30000, or the number of rows in the table for smaller tables?

Is 30000 some magical value? How to control it?

Using Postgres 12 in Debian.
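For context: ANALYZE samples 300 × the statistics target rows, which is where 
30000 = 300 × 100 comes from. The target can be raised per column (a sketch; 
"mytable" and "mycol" are placeholder names):

```sql
-- override default_statistics_target for one column
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 500;
ANALYZE mytable;   -- sample is then up to 300 * 500 = 150000 rows
```

For tables smaller than the sample size, ANALYZE simply scans every row, which 
explains the smaller counts reported for small tables.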

Andrus.





Re: Using compression on TCP transfer

2020-04-04 Thread Andrus

Hi!

With varchar fields, values will appear in the database sometimes with 
trailing spaces and sometimes without.
This requires major application re-design, which is much more expensive than 
continuing to use char fields.

A simple BEFORE INSERT OR UPDATE trigger would take care of that.


Changing char to varchar will break commands where a trailing space is used in 
a comparison.

For example query

create table test ( test char(10) );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space

does not return data anymore if your recommendation is used:

create table test ( test varchar );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space

In production, 'test ' is a query parameter coming from the application with possible 
trailing space(s).

Adding a trigger does not fix this.
How to fix this without re-writing a huge number of SQL commands?

Andrus.




Re: Using compression on TCP transfer

2020-04-05 Thread Andrus
Hi!
Thank you.

>If you decide that it's without, you could apply a TRIM in a trigger on each 
>INSERT and UPDATE. Then, you could replace the table by a view of the same 
>name >and implement the TRIM on SELECT there. This way, you don't have to 
>touch anything in the application.

Could you provide sample code showing how to create the view, or another method, so 
that my select statement returns data?

Currently the select in this code

create table test ( test varchar );
insert into test values ('test');
select * from test where test ='test '; -- note trailing space


does not return data.
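A minimal sketch of the view idea (table and view names are illustrative): casting 
the column back to char(10) in the view makes the comparison use bpchar semantics, 
which ignore trailing spaces, so the application's queries keep working:

```sql
create table test_base ( test varchar );
insert into test_base values ('test');

-- view with the old table's name, presenting the column as char(10)
create view test as
  select test::char(10) as test from test_base;

-- the trailing space no longer prevents a match
select * from test where test = 'test ';
```

Writes through the view would still need INSTEAD OF triggers (or rules) that TRIM 
before inserting into the base table.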

Andrus.

Server with hot standby slave wont start after vacuum

2020-04-05 Thread Andrus

Hi!

Streaming asynchronous binary replication is used with hot standby slave.

To recover disk space

vacuumdb --all --full --skip-locked

is executed every night on the master.

During this, vacuumdb stops with an error:

vacuumdb: error: vacuuming of table "myschema.mytable" in database "mydb" failed: PANIC:  could not write to file 
"pg_wal/xlogtemp.24729": No space left on device

server closed the connection unexpectedly
   This probably means the server terminated abnormally
   before or while processing the request.

And the master won't start anymore:

LOG:  server process (PID 24729) was terminated by signal 6: Aborted
DETAIL:  Failed process was running: VACUUM (SKIP_LOCKED, FULL) firma39.rid;
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.

HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
...
LOG:  all server processes terminated; reinitializing
LOG:  database system was interrupted; last known up at 2020-04-06 01:14:10 EEST
LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 2A0/C414BA68
FATAL:  could not extend file "global/58294678": wrote only 4096 of 8192 bytes 
at block 1728
HINT:  Check free disk space.
CONTEXT:  WAL redo at 2A0/D661D4B0 for XLOG/FPI:
LOG:  startup process (PID 24732) exited with exit code 1
LOG:  aborting startup due to startup process failure
LOG:  database system is shut down

pg_wal contains 2005 files with a total size of 32 GB and there is no free disk 
space.

The hot standby server is connected over a 20 Mbit internet connection.

Maybe vacuum full causes creation of a huge number of files in pg_wal which 
cannot be transferred fast enough over 20 Mbit internet.

How to fix this so that the master continues to work?
Maybe it is possible to disable the creation of WAL files by vacuum.

Postgres 12 in Debian is used.

Andrus. 






How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Streaming asynchronous binary replication is used with hot standby slave.

If the slave stops responding, the master server will accumulate files in the pg_wal 
directory.
If the disk becomes full, the master server crashes too.

How to avoid this?

If the disk is nearly full, the master should stop creating additional files (and 
maybe stop or disable the replication slot).
Postgres 12 in Debian 10 is used.

Andrus.




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Thank you.


That's why you use monitoring.
Never use replication slots without monitoring replication (or at least the
disk space on the primary).


How to implement this automatically, without human interaction required?

A "superuser_reserved_connections" setting exists.
How about implementing a "data_reserved_space" setting?

How to create a procedure on the server, or maybe a cron script, which stops 
replication if the disk becomes nearly full?

How to force a replication slot on the master to stop if it is in use?
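A rough cron sketch of the last question (assumptions: the slot is named "standby1", 
the data directory path and the 90% threshold are illustrative; 
pg_terminate_backend and pg_drop_replication_slot are real Postgres 12 functions):

```shell
#!/bin/sh
# if the pg_wal filesystem is over 90% full, kill the walsender using the
# slot (a slot cannot be dropped while active) and then drop the slot
PCT=$(df --output=pcent /var/lib/postgresql/12/main/pg_wal | tail -1 | tr -dc '0-9')
if [ "$PCT" -gt 90 ]; then
    psql -U postgres -Atc "SELECT pg_terminate_backend(active_pid)
                             FROM pg_replication_slots
                            WHERE slot_name = 'standby1' AND active"
    psql -U postgres -Atc "SELECT pg_drop_replication_slot('standby1')"
fi
```

After the slot is dropped the standby must be rebuilt (e.g. with pg_basebackup) 
before replication can resume.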

Andrus.




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!

Thank you.


When you use replication slots, it is very important to put in place a
monitoring solution to check if too much WAL is retained, and note
that there is nothing able to do that natively in core Postgres.
There are however multiple ways to solve this problem, like a
background worker (for the slot monitoring as well as optionally
killing and/or dropping), a simple cron job or even check_postgres.


Where can I find a sample of how to implement this?

I read in the docs that a slot cannot be dropped if it is in use.
How to stop replication in this case?

Andrus.




Re: How to prevent master server crash if hot standby stops

2020-04-06 Thread Andrus

Hi!


If you prefer replication to fail silently, don't use replication
slots.  Use "wal_keep_segments" instead.


I decided to give 1 GB to WAL. So I added

wal_keep_segments=60

After some time Postgres created 80 files with a total size of 1.3 GB.


How to fix this so that no more than 1 GB of disk space is used?
How to get information about how many WAL files are not yet processed by the slave?
How to delete processed WAL files so that 1 GB of disk space can be used for some 
other purpose?
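A sketch for the second question: with streaming replication the standby's lag is 
visible in pg_stat_replication (the functions below exist in Postgres 12):

```sql
-- bytes of WAL the standby has not yet replayed
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;
```

Dividing the result by the 16 MB segment size gives an estimate of how many WAL 
files the standby is behind.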

/var/lib/postgresql/12/main/pg_wal# ls
00000001000002A200000072  00000001000002A200000073  00000001000002A200000074  00000001000002A200000075
00000001000002A200000076  00000001000002A200000077  00000001000002A200000078  00000001000002A200000079
00000001000002A20000007A  00000001000002A20000007B  00000001000002A20000007C  00000001000002A20000007D
00000001000002A20000007E  00000001000002A20000007F  00000001000002A200000080  00000001000002A200000081
00000001000002A200000082  00000001000002A200000083  00000001000002A200000084  00000001000002A200000085
00000001000002A200000086  00000001000002A200000087  00000001000002A200000088  00000001000002A200000089
00000001000002A20000008A  00000001000002A20000008B  00000001000002A20000008C  00000001000002A20000008D
00000001000002A20000008E  00000001000002A20000008F  00000001000002A200000090  00000001000002A200000091
00000001000002A200000092  00000001000002A200000093  00000001000002A200000094  00000001000002A200000095
00000001000002A200000096  00000001000002A200000097  00000001000002A200000098  00000001000002A200000099
00000001000002A20000009A  00000001000002A20000009B  00000001000002A20000009C  00000001000002A20000009D
00000001000002A20000009E  00000001000002A20000009F  00000001000002A2000000A0  00000001000002A2000000A1
00000001000002A2000000A2  00000001000002A2000000A3  00000001000002A2000000A4  00000001000002A2000000A5
00000001000002A2000000A6  00000001000002A2000000A7  00000001000002A2000000A8  00000001000002A2000000A9
00000001000002A2000000AA  00000001000002A2000000AB  00000001000002A2000000AC  00000001000002A2000000AD
00000001000002A2000000AE  00000001000002A2000000AF  00000001000002A2000000B0  00000001000002A2000000B1
00000001000002A2000000B2  00000001000002A2000000B3  00000001000002A2000000B4  00000001000002A2000000B5
00000001000002A2000000B6  00000001000002A2000000B7  00000001000002A2000000B8  00000001000002A2000000B9
00000001000002A2000000BA  00000001000002A2000000BB  00000001000002A2000000BC  00000001000002A2000000BD
00000001000002A2000000BE  00000001000002A2000000BF  00000001000002A2000000C0  00000001000002A2000000C1
archive_status


Andrus.





Re: How to prevent master server crash if hot standby stops

2020-04-07 Thread Andrus

Hi!


About your third question, you *never* manually mess with the files in pg_wal.
The server does that.


Is it OK to stop the server, delete all files in the pg_wal directory and restart 
the server?

Or should the default value be put back, waiting until the server frees 1 GB of 
disk space?

Andrus.




Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus

Hi!

Query returns no rows but its execution time in Postgres 12 depends on the 
column expression.

Query with  column expression

coalesce( (select sum(taitmata) from rid join dok using (dokumnr)
where toode=toode.toode and doktyyp='T' and not dok.taidetud and dok.kinnitatud 
and
taitmata is not null),0)

takes  666 ms :

explain analyze select
coalesce( (select sum(taitmata) from rid join dok using (dokumnr)
where toode=toode.toode and doktyyp='T' and not dok.taidetud and dok.kinnitatud 
and
taitmata is not null),0)
from toode
   where toode.ribakood='testmiin'::text
  or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
  or toode.toode ilike '%'||'testmiin'||'%' escape '!'
  or toode.markused ilike '%'||'testmiin'||'%' escape '!'
  or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
or to_tsvector('english',toode.engnimetus) @@  
plainto_tsquery('testmiin')

"Gather  (cost=1000.00..505930.82 rows=153 width=32) (actual time=661.419..661.476 
rows=0 loops=1)"
"  Workers Planned: 1"
"  Workers Launched: 1"
"  ->  Parallel Seq Scan on toode  (cost=0.00..10015.31 rows=90 width=21) (actual 
time=574.922..574.922 rows=0 loops=2)"
"Filter: (((ribakood)::text = 'testmiin'::text) OR (nimetus ~~* '%testmiin%'::text) OR (toode ~~* '%testmiin%'::text) OR 
(markused ~~* '%testmiin%'::text) OR (to_tsvector('english'::regconfig, (nimetus)::text) @@ plainto_tsquery('testmiin'::text)) OR 
(to_tsvector('english'::regconfig, (engnimetus)::text) @@ plainto_tsquery('testmiin'::text)))"

"Rows Removed by Filter: 7202"
"  SubPlan 1"
"->  Aggregate  (cost=3234.63..3234.64 rows=1 width=32) (never executed)"
"  ->  Nested Loop  (cost=11.26..3234.52 rows=43 width=3) (never 
executed)"
"->  Bitmap Heap Scan on rid  (cost=10.84..1191.72 rows=270 width=7) 
(never executed)"
"  Recheck Cond: (toode = toode.toode)"
"  Filter: (taitmata IS NOT NULL)"
"  ->  Bitmap Index Scan on rid_toode_pattern_idx  
(cost=0.00..10.77 rows=312 width=0) (never executed)"
"Index Cond: (toode = toode.toode)"
"->  Index Scan using dok_pkey on dok  (cost=0.42..7.57 rows=1 
width=4) (never executed)"
"  Index Cond: (dokumnr = rid.dokumnr)"
"  Filter: ((NOT taidetud) AND kinnitatud AND (doktyyp = 
'T'::bpchar))"
"Planning Time: 2.102 ms"
"JIT:"
"  Functions: 24"
"  Options: Inlining true, Optimization true, Expressions true, Deforming true"
"  Timing: Generation 5.592 ms, Inlining 95.077 ms, Optimization 463.742 ms, 
Emission 277.062 ms, Total 841.473 ms"
"Execution Time: 666.007 ms"


The same query with the simple column expression

1

runs 3.6 times faster:


explain analyze select 1
from toode
   where toode.ribakood='testmiin'::text
  or toode.nimetus ilike '%'||'testmiin'||'%' escape '!'
  or toode.toode ilike '%'||'testmiin'||'%' escape '!'
  or toode.markused ilike '%'||'testmiin'||'%' escape '!'
  or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin')
or to_tsvector('english',toode.engnimetus) @@
plainto_tsquery('testmiin')

"Gather  (cost=1000.00..11030.61 rows=153 width=4) (actual time=182.414..185.648 
rows=0 loops=1)"
"  Workers Planned: 1"
"  Workers Launched: 1"
"  ->  Parallel Seq Scan on toode  (cost=0.00..10015.31 rows=90 width=4) (actual 
time=155.338..155.339 rows=0 loops=2)"
"    Filter: (((ribakood)::text = 'testmiin'::text) OR (nimetus ~~* '%testmiin%'::text) OR (toode ~~* '%testmiin%'::text) OR 
(markused ~~* '%testmiin%'::text) OR (to_tsvector('english'::regconfig, (nimetus)::text) @@ plainto_tsquery('testmiin'::text)) OR 
(to_tsvector('english'::regconfig, (engnimetus)::text) @@ plainto_tsquery('testmiin'::text)))"

"Rows Removed by Filter: 7202"
"Planning Time: 1.729 ms"
"Execution Time: 185.674 ms"

If there are more column expressions, the performance difference is bigger.
The rid table used in the column expression contains 1.8 million rows.
The performance degradation probably occurred after upgrading from Postgres 9.1 to 
Postgres 12.

Since no data is returned, query performance should be the same.
How to fix it?

Andrus. 





Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus
Hi!
>on your query there is too slow JIT. Is strange how much. So the best way is 
>disable JIT probably 

>set jit to off;
>or same field in postgresql.conf

Thank you.

set jit to off  

makes the select fast.
I have encountered this issue only in this query in one database.

There is a variation of this query running with different data in a different 
database on the same Debian 10 server. It works fast.
Should I disable jit only for this query or permanently in postgresql.conf?
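
For reference, jit can be scoped more narrowly than a cluster-wide setting; a sketch (the database name mydb is a placeholder):

```sql
-- Per session / per query:
SET jit = off;

-- Per database, so other databases in the cluster keep JIT:
ALTER DATABASE mydb SET jit = off;

-- Or keep JIT but raise the cost threshold that triggers it:
ALTER SYSTEM SET jit_above_cost = 500000;
SELECT pg_reload_conf();
```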

Andrus.

Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-07 Thread Andrus
Hi!

>It is really strange why it is too slow. Can you prepare test case? Looks like 
>bug (maybe not Postgres's bug)

Testcase is below.
With  jit on it takes 3.3 sec  and with jit off 1.5 sec.

Andrus.

create temp table toode ( toode char(20), ribakood char(20), 
 nimetus char(50), markused char(50), engnimetus 
char(50) ) on commit drop;
insert into toode (toode) select generate_series(1,14400);
CREATE INDEX ON toode USING gin
(to_tsvector('english'::regconfig, nimetus::text));
CREATE UNIQUE INDEX ON toode (ribakood)
WHERE ribakood IS NOT NULL AND btrim(ribakood::text) <> ''::text;
CREATE INDEX ON toode (toode);
CREATE UNIQUE INDEX ON toode (upper(toode::text));
create temp table dok ( dokumnr serial  primary key ) on commit drop;
insert into dok  select generate_series(1,14400);

create temp table rid ( dokumnr int, taitmata numeric, toode char(20)  ) on 
commit drop;
insert into rid  select generate_series(1,144);

CREATE INDEX rid_dokumnr_idx ON rid (dokumnr);
-- jit on: 3.3 sec  jit off: 1.5 sec
set jit to off;
select 
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
  (select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) 

from toode 
where toode.ribakood='testmiin'::text
   or toode.nimetus ilike '%'||'testmiin'||'%' escape '!' 
   or toode.toode ilike '%'||'testmiin'||'%' escape '!' 
   or toode.markused ilike '%'||'testmiin'||'%' escape '!'

or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin') 
 or to_tsvector('english',toode.engnimetus) @@
 plainto_tsquery('testmiin')  

Re: Performance degradation if query returns no rows and column expression is used after upgrading to 12

2020-04-08 Thread Andrus
Hi


>this query is little bit strange - it has pretty big cost, and because returns 
>nothing, then it's pretty fast against cost.  
>there is 18 subqueries, but jit_above_cost is ralated just to one query. This 
>is probably worst case for JIT.
>This query is pretty slow and expensive (and then the cost of JIT is minimal), 
>but when the query returns some rows, then JIT start to helps. 
>So maybe if you find some queries that returns some rows, then the speed will 
>be better with active JIT than with disabled JIT.

Below is a modified test case which returns one row.
On a Debian 10 VPS it takes 2.5 sec with jit on and 0.4 sec with jit off.

jit is still many times slower on Debian even when data is returned.

On a Windows 10 workstation there is no difference.

>The situation when the query returns no rows, then JIT is significant 
>bottleneck - but it looks like corner case.

Both test cases simulate search queries in a typical e-shop. 
Users can use any search term and expect the query to return fast.

Modified testcase which returns one row:

create temp table toode ( toode char(20) primary key, ribakood char(20), 
 nimetus char(50), markused char(50), engnimetus char(50) ) on 
commit drop;
insert into toode (toode) select generate_series(1,14400);
insert into toode (toode,nimetus) 
select 'TEST'|| generate_series, 'This is testmiin item'
  from generate_series(1,1);

CREATE INDEX ON toode USING gin(to_tsvector('english'::regconfig, 
nimetus::text));
CREATE UNIQUE INDEX ON toode (ribakood)
WHERE ribakood IS NOT NULL AND btrim(ribakood::text) <> ''::text;
create temp table dok ( dokumnr serial  primary key ) on commit drop;
insert into dok  select generate_series(1,1);

create temp table rid (id serial primary key, 
   dokumnr int references dok, taitmata numeric, toode 
char(20) references toode ) on commit drop;
insert into rid  (dokumnr,toode)
select generate_series % 1+1, 1
from  generate_series(1,1);

CREATE INDEX ON rid(dokumnr );
CREATE INDEX ON rid(toode);
-- jit on: 2.5 sec  jit off:  0.4 s
set jit to off;
select 
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
  (select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) ,
(select sum(taitmata) from rid join dok using (dokumnr)  where 
toode=toode.toode ) 

from toode 
where toode.ribakood='testmiin'::text
   or toode.nimetus ilike '%'||'testmiin'||'%' escape '!' 
   or toode.toode ilike '%'||'testmiin'||'%' escape '!' 
   or toode.markused ilike '%'||'testmiin'||'%' escape '!'

or to_tsvector('english',toode.nimetus) @@ plainto_tsquery('testmiin') 
 or to_tsvector('english',toode.engnimetus) @@
 plainto_tsquery('testmiin')  

Andrus.

Huge number of pg_temp and pg_toast_temp schemas

2020-04-10 Thread Andrus

Hi!

A Postgres 12 database contains a huge number of pg_temp and pg_toast_temp schemas 
named

pg_temp_1 ..  pg_temp_126

and

pg_toast_temp_1  .. pg_toast_temp_126

There are a total of 2 * 126 = 252 unnecessary schemas in one database.
These schemas do not seem to contain any objects.

Other databases in the cluster have similar schemas, but fewer of them.

How to delete them automatically and prevent them from appearing in the future?

The cluster stopped once because the disk became full, and streaming async 
replication is used. Maybe one of those, or some other factor, created or
continues to create those schemas. Postgres 12.2 on Debian 10 is used.
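
To confirm the schemas are empty, a catalog query along these lines can be used (a sketch):

```sql
-- Count objects per temporary schema; empty schemas show 0:
SELECT n.nspname, count(c.oid) AS objects
FROM pg_namespace n
LEFT JOIN pg_class c ON c.relnamespace = n.oid
WHERE n.nspname ~ '^pg_(toast_)?temp_'
GROUP BY n.nspname
ORDER BY n.nspname;
```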

Andrus. 


Re: Huge number of pg_temp and pg_toast_temp schemas

2020-04-10 Thread Andrus

Hi!

Thank you.


This is an unsurprising state, if you routinely have a hundred-plus
connections that sometimes create temp tables.  Each such session
needs schemas to keep its temp tables in.  The temp tables are
deleted at session exit, but we don't bother to remove the schema
entries, figuring that they'll probably be needed again later, and
a couple of rows in pg_namespace is negligible overhead anyway.


How to hide temp schemas from the pgAdmin 4 tree?
With such a large number of temporary schemas, the pgAdmin schema tree view becomes 
polluted, which makes database management difficult.
I turned most of the Nodes options off in the pgAdmin options, but pgAdmin still shows 
them. It also shows the pg_toast schema.

Andrus.


Forcibly disconnect users from one database

2020-04-10 Thread Andrus

Hi!

Postgres 12 does not allow to drop database if it is in use. DROP DATABASE 
throws error:

postgres@template1 ERROR:  database "mydb" is being accessed by other users
postgres@template1 DETAIL:  There is 1 other session using the database.
postgres@template1 STATEMENT:  DROP DATABASE "mydb"

How to disconnect all users from a database?
Using 

/etc/init.d/postgresql restart  


disconnects users from all databases. How to disconnect users from one database 
only?
I looked into pgAdmin 4 but haven't found such a command. 


pgAdmin 3 had buttons to kill processes selected by database name, but those 
disappeared in pgAdmin 4.

How to find which users are using the database and forcibly disconnect them?
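
Sessions can be listed and terminated from SQL; a sketch (mydb is the database from the error above; terminating other sessions needs superuser or the pg_signal_backend role):

```sql
-- List sessions connected to mydb:
SELECT pid, usename, application_name, client_addr
FROM pg_stat_activity
WHERE datname = 'mydb';

-- Terminate every session connected to mydb except our own:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'mydb'
  AND pid <> pg_backend_pid();
```

After the sessions are gone, DROP DATABASE should succeed.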

Andrus.




Which commands are guaranteed to drop role

2020-04-11 Thread Andrus

Hi!

Database "mydb" is owned by role "mydb_owner". 

User "currentuser"  tries to delete role "roletodelete" from this database using 


revoke all on all tables in schema public,firma1 from "roletodelete" cascade;
revoke all on all sequences in schema public,firma1 from "roletodelete" cascade;
revoke all on database mydb from "roletodelete" cascade;
revoke all on all functions in schema public,firma1 from "roletodelete" cascade;
revoke all on schema public,firma1 from "roletodelete" cascade;
revoke mydb_owner from "roletodelete" cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES from 
"roletodelete";
GRANT "roletodelete" TO "currentuser";
reassign owned by "roletodelete" to mydb_owner;
drop owned by "roletodelete";
drop role "roletodelete";

But got error

ERROR: role "roletodelete" cannot be dropped because some objects depend on it
DETAIL: privileges for schema public;

How to create a script which is guaranteed to delete the role?

This script already contains:

revoke all on schema public,firma1 from "roletodelete" cascade;

Why does Postgres complain that privileges for schema public depend on this role if 
they are revoked?
How to fix this?
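
The leftover dependencies can usually be seen in the catalogs; a sketch of a query that lists default-privilege entries mentioning the role (roletodelete as above):

```sql
-- Default ACLs that still reference the role; each row corresponds to an
-- ALTER DEFAULT PRIVILEGES ... REVOKE that is still needed:
SELECT pg_get_userbyid(d.defaclrole) AS granting_role,
       n.nspname                     AS schema,
       d.defaclobjtype,
       d.defaclacl
FROM pg_default_acl d
LEFT JOIN pg_namespace n ON n.oid = d.defaclnamespace
WHERE d.defaclacl::text LIKE '%roletodelete%';
```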

Andrus 





Re: Which commands are guaranteed to drop role

2020-04-12 Thread Andrus

Hi!


You cannot write such a script, but you will have to REVOKE and change ownership

and ALTER DEFAULT PRIVILEGES until no more dependencies on the role exist.

I ran the script as superuser. In this case more detailed information appears:

ERROR: role "roletodelete" cannot be dropped because some objects depend on it

DETAIL:  privileges for default privileges on new relations belonging to role 
currentuser in schema public
privileges for default privileges on new relations belonging to role 
currentuser schema firma1

I changed script to 


do $$
   DECLARE r record;
   begin
 for r in select * from pg_views where schemaname IN ('public','firma1')
 loop
   execute 'revoke all on ' || quote_ident(r.schemaname) ||'.'|| 
quote_ident(r.viewname) || ' from roletodelete cascade';
 end loop;
   end $$;
GRANT roletodelete TO currentuser;
revoke all on all tables in schema public,firma1 from roletodelete cascade;
revoke all on all sequences in schema public,firma1  from roletodelete cascade;
revoke all on all functions in schema public,firma1 from roletodelete cascade;
revoke all on schema public,firma1 from roletodelete cascade;
REVOKE CONNECT ON DATABASE mydb from roletodelete cascade;
revoke all on database mydb from roletodelete cascade;

revoke mydb_owner  from roletodelete  cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1 revoke all ON TABLES from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON sequences from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON functions from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES IN SCHEMA public,firma1  revoke all ON types from 
roletodelete cascade;
ALTER DEFAULT PRIVILEGES revoke all ON schemas from roletodelete cascade;

REVOKE USAGE ON SCHEMA public,firma1 FROM roletodelete cascade;
reassign owned by roletodelete to mydb_owner;
drop owned by roletodelete;
drop role roletodelete;

In this case it deleted the user when run as the superuser postgres.

A non-superuser still cannot delete the user using this script. How to allow a 
non-superuser to delete the user as well?


That is why you are well advised not to grant permissions to a role that you
plan to drop.


A role represents a person. An ODBC connection is used. The person's rights should be 
restricted in the database in this case.

How to simplify this script so that the user is always deleted?
Maybe some parts of the script are not necessary.

Why does Postgres not have a simple command like 


drop role roletodelete reassign owned to currentuser cascade

but requires a 25-line script for this?
Some parts of this script will probably not work in Postgres 9.0. How to add 9.0+ support for it? 
Revoking privileges from views is not required in earlier releases.


Andrus.





How to restore database to previous state

2020-05-10 Thread Andrus

Hi!

A Postgres 12 database is running on Debian.

max_wal_size = 5GB

is specified in the postgresql.conf file.

How to restore the database to some previous state, e.g. before important data was 
accidentally deleted?

Binary asynchronous replication with a slot is also used.

It looks like it is possible to create a standby server which replicates changes with a 24-hour delay, plus data directory backups 
every night.

However, this requires a separate delayed standby server.
Is there a simpler solution, e.g. reverse playback of WAL segments starting from 
the current data directory contents?
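
For reference, the delayed-standby approach mentioned above needs only one extra setting on the standby; a sketch of the relevant postgresql.conf fragment (host and user names are placeholders):

```
# On the delayed standby (PostgreSQL 12), in postgresql.conf:
primary_conninfo = 'host=primary-host user=replicator'
# Hold back WAL replay so mistakes on the primary can be caught in time:
recovery_min_apply_delay = '24h'
```

With this, the standby stays 24 hours behind the primary, so accidentally deleted data is still present there for a day.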

Andrus. 






Hot and PITR backups in same server

2020-05-17 Thread Andrus

Hi!

How to implement hot standby and a PITR recovery possibility on the same backup 
server?

Plan is:

1. Create base backup using

pg_basebackup --checkpoint=fast --verbose --progress --write-recovery-conf -D 
/var/lib/postgresql/12/standby

2. Create backup copy of   /var/lib/postgresql/12/standby  directory for PITR

3. set max_wal_size in postgresql.conf   to   5 TB

4. Start backup server for hot standby backups.

If data from an earlier point in time is required:

1. Stop the backup server
2. Replace its data directory with the initial data directory contents created in 
p.2 above
3. Copy pg_wal contents from the hot standby pg_wal directory to the initial pg_wal 
directory in the base backup
4. Specify the recovery time in postgresql.conf and start the backup server to recover 
to this point in time.

The most suspicious point is p.3: manually copying pg_wal contents from the hot 
standby data to the base backup data.

Is this OK? Or is there some better way to implement hot standby and a PITR possibility 
on the same computer?
Postgres 12 in Debian is used.

Andrus.




Re: Hot and PITR backups in same server

2020-05-18 Thread Andrus

Hi!


This is confused or at least confusing.
- "max_wal_size" of 5TB is clearly insane.
- I don't understand what you mean by "Start backup server for hot standby 
backups".
Do I get it right that you want to copy a streaming replication standby 
server's data

directory to perform PITR?

I want to copy only pg_wal directory contents from this.

After pg_basebackup has finished, a copy of its data directory is saved for 
possible PITR.
Its pg_wal contents will be replaced with the current pg_wal directory contents.

After that the recovery time will be set in postgresql.conf, and a separate server on port 5433 
is used to perform PITR using this data directory.


That doesn't seem like a good plan, because the standby
usually won't be much behind the primary server, and you can only recover to a 
later
point in time.


I will take the data from the copy made when the hot backup server was created and replace its pg_wal directory contents with pg_wal from the current 
backup server's pg_wal directory.

Since max_wal_size is big, it should contain all WAL segments from the time when the 
base backup was taken.


If you care to elaborate, perhaps the question can be answered.


Currently the binary async hot standby backup server is working OK and replicates the 
main server's databases almost immediately.

However, if important data is deleted on the same day, it cannot be recovered, since the hot standby server database has replicated the same data as 
the master server.

Master server disk space is limited. It cannot keep a large number of WAL 
segments.
The standby server has lots of free disk space.

I'm looking for a way to collect data for PITR recovery (WAL segments) on the 
standby server.

I have an initial base backup of the data directory created using pg_basebackup.
All WAL segments from the master server after pg_basebackup should be saved on the backup server and used for PITR recovery when 
needed.


How to use hot standby server for this ?

Or should separate cluster in 5433 port created and pg_rewind or something 
other used for PITR.

Andrus. 






Re: Hot and PITR backups in same server

2020-05-19 Thread Andrus

Hi!

Thank you.


As Laurentz said, even if you set it to 5TB, no WAL files older than
the files needed by the last checkpoint don't remain. If you don't
need a hot-standby, you can use pg_receivewal to save WAL files on the
"standby" server.  If you need the hot-standby, WAL files should be
archived.


So pg_receivewal should run in parallel with the hot standby server to receive 
WAL files into a separate directory.

Will each WAL file be transferred twice in this case? Once by the hot standby 
server and a second time by pg_receivewal?
Main server traffic is priced by the amount of bandwidth.
How to receive each WAL file only once? This would halve network 
bandwidth and thus cost.

pg_receivewal has a compression option. Will this compress WAL files before 
transfer over the network?


If so, what you need seems to be pg_receivewal, not a full-fledged
server.


For hot standby and PITR on the same server the following steps are required:

1. Create a base backup using pg_basebackup
2. Create a copy of the base backup for PITR
3. Start the hot standby server using the data directory created in p.1
4. Run pg_receivewal as a background process to save WAL files to the backup created 
in p.2

If PITR is required, specify the recovery time in the postgresql.conf of the base backup created in 
p.2.
Start a second PostgreSQL server instance on port 5433 which uses this backup for 
recovery.

Is this OK or should something be changed?

How to run pg_receivewal in the background? Is there some option which creates such a service so it will automatically restart if the server is 
restarted?
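
One common way to keep pg_receivewal running in the background on Debian is a systemd unit; a sketch (the binary path, host, user, and archive directory are assumptions):

```
# /etc/systemd/system/pg-receivewal.service
[Unit]
Description=Stream WAL from the primary with pg_receivewal
After=network-online.target

[Service]
User=postgres
ExecStart=/usr/lib/postgresql/12/bin/pg_receivewal \
    --host=primary-host --username=replikaator \
    --directory=/var/lib/postgresql/wal_archive --compress=9
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable --now pg-receivewal` makes it start automatically after a server restart.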


How to allow the main server to keep a sufficient number of WAL segments?
A replication slot cannot be used: if the backup server stops, the replication slot causes the main server to fill disk space with untransferred WAL 
files.

After that the main server will also stop with a "no space left on device" error.

Or is there some option, like reserving some disk space or limiting WAL size, so 
that the main server can continue after a backup server crash?

Andrus. 






How to recover from compressed wal archive in windows

2020-05-20 Thread Andrus
In Windows 10 


pg_receivewal --directory="d:\wallog" --verbose --compress=9

is used to archive WAL.

This creates .gz files.

For restore

restore_command='copy "D:\\wallog\\%f" "%p"' 


is used.
Restore shows "file not found" errors in the console. This is probably because the %f 
argument is the WAL file name without extension.
How to use compressed WAL files for WAL archive and restore in Windows?
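
Something like the following might work in the recovery server's postgresql.conf, assuming a gzip.exe (e.g. from MSYS2) is on the PATH; the tool choice and path are assumptions, not a tested recipe:

```
restore_command = 'gzip -d -c "D:\\wallog\\%f.gz" > "%p"'
```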

Andrus.





Re: How to recover from compressed wal archive in windows

2020-05-20 Thread Andrus

Hi!


I'm guessing:
restore_command='copy "D:\\wallog\\%f.gz" "%p"'
will get you the file.
The next problem is that I'm pretty sure a WAL file with *.gz extension 
will not be able to be processed directly by the server. So you are 
going to have to uncompress it at some point before it gets restored.


How to decompress it automatically in Windows?
In Linux 


restore_command = 'gunzip < "archived_wal/%f" > "%p"'

maybe works.

Will wal_compression=on produce compressed WAL files so that additional 
compression is not needed?

Andrus.




Re: How to recover from compressed wal archive in windows

2020-05-20 Thread Andrus

Hi!

Will wal_compression=on produce compressed WAL files so that additional 
compression is not needed?

Yes. Not sure how it will play with the streaming that pg_receivewal does.


I looked into the WAL files created with wal_compression=on in the pg_wal directory.
They still contain a lot of ASCII data from the database which is not packed.
It looks like the WAL file content is not compressed.

Maybe the best thing is to back up a bit and let us know what it is you 
are trying to achieve?


Trying to add a PITR possibility from a hot standby server which uses async binary 
streaming replication without a named slot.
The server creates 1 GB of WAL files per day.
pg_receivewal with --compress=9 produces 3..5 MB WAL files instead of 16 MB.
Trying to save disk space. 
I set the Windows directory attribute to "compressed" on the WAL archive directory. Transparent compression is then applied by the OS.


However, when WAL files are copied to another device, the copy is probably performed in 
uncompressed form in Windows.
How to set a compressed attribute on the pg_wal directory in a Linux ext4 file system?

Andrus.




Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!

In Windows, pg_basebackup was used to create a base backup from a Linux server.
The baas column data type is character(8).

On the Linux server the query 


select *  from firma1.desktop where baas='_LOGIFAI'

returns 16 rows.

On the Windows server this query returns 0 rows.

On the Windows server the same query using like

select *  from firma1.desktop where baas like '_LOGIFAI'

properly returns 16 rows. 


Maybe this is because database locale is not known in windows:

CREATE DATABASE sba
   WITH 
   OWNER = sba_owner

   ENCODING = 'UTF8'
   LC_COLLATE = 'et_EE.UTF-8'
   LC_CTYPE = 'et_EE.UTF-8'
   TABLESPACE = pg_default
   CONNECTION LIMIT = -1;

Correct encoding for windows should be

LC_COLLATE = 'Estonian_Estonia.1257'
LC_CTYPE = 'Estonian_Estonia.1257'

If so, how to fix the Windows cluster so that the query returns the proper result in 
Windows also?
The database in Windows is in read-only (recovery) mode so it cannot be changed.
Postgres 12 is used.

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Are you referring to two different instances of Postgres on Windows?


No. 
Main server is in Linux and backup server is in windows.


Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


The LIKE query probably doesn't use an index and thus finds the relevant
data via sequential scan and equality checks on each record.



Yeah, exactly.  An equality condition will use a btree index if
available.  LIKE, however, sees the "_" as a wildcard so it cannot
use an index and resorts to a seqscan --- which will work fine.
It's just index searches (and index-based sorts) that are broken.
Of course, if there isn't an index on the column in question
then this theory falls to the ground.


There is composite index on baas column

CREATE TABLE public.desktop
(
   id integer NOT NULL DEFAULT nextval('desktop_id_seq'::regclass),
   recordtype character(5) COLLATE pg_catalog."default" NOT NULL,
   klass character(1) COLLATE pg_catalog."default",
   baas character(8) COLLATE pg_catalog."default" NOT NULL,
   liigid character(1) COLLATE pg_catalog."default" NOT NULL DEFAULT ''::bpchar,
   jrk numeric(4,0) NOT NULL DEFAULT 0,
...
   CONSTRAINT desktop_pkey PRIMARY KEY (id),
   CONSTRAINT desktop_baas_not_empty CHECK (baas <> ''::bpchar),
   CONSTRAINT desktop_id_check CHECK (id > 0),
   CONSTRAINT desktop_recordtype_check CHECK (recordtype = 'Aken'::bpchar OR 
recordtype = 'Veerg'::bpchar)
)

TABLESPACE pg_default;
CREATE INDEX desktop_baas_liigid_idx
   ON public.desktop USING btree
   (baas COLLATE pg_catalog."default" ASC NULLS LAST, liigid COLLATE 
pg_catalog."default" ASC NULLS LAST)
   TABLESPACE pg_default;

Maybe it is possible to force Postgres on Windows to use the same locale as on 
Linux. The locales are actually the same.

Andrus.





Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


No, what it sounds like is the OP tried to physically replicate a
database on another platform with completely different sorting rules.


The sorting rules for this locale must be the same on both platforms.
Only the locale names are different. It looks like the Windows server does not 
recognize the Linux locale name.


Which means all his text indexes are corrupt according to the
destination platform's sorting rules, which easily explains the
observed misbehavior (ie, index searches not finding the expected rows).


Lots of queries seem to work properly.

REINDEX would fix it. 


REINDEX throws error

ERROR:  cannot execute REINDEX during recovery
SQL state: 25006


But the major point here is you can't just ignore
a collation mismatch, which in turn implies that you can't do physical
replication from Linux to Windows, or vice versa (and most other
cross-platform cases are just as dangerous).


The database is used in recovery mode to find the proper recovery point and to get data 
from it at this point.
The locales are actually the same. On Windows, Postgres does not recognize the Linux locale 
name.


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


Most queries seem to work. 
The database should be examined to get accidentally deleted data from it.


Is making it read-write and reindexing the only solution, or can it be fixed in the read-only 
database also, e.g. by forcing the same locale in postgresql.conf?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Main server is in Linux and backup server is in windows.

This is not a supported setup if you want to run a physical backup.
Your backup and your primary need to be the same - software and hardware.  
Consider anything that is working to be a false
negative – assume >something will break or simply give incorrect results.


This base backup should be used for recovery. Taking a new base backup in Linux does 
not allow recovery to an earlier date.
Both servers have Intel 64-bit CPUs.
I understand that the only issue is the index structure and that REINDEX will fix 
this.
What other issues may occur?

Will pg_dump/pg_restore on the Windows server fix all issues?

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-21 Thread Andrus

Hi!


Database in Windows is in read-only (recovery) mode so it cannot changed.

Then you might as well just rm -rf it (or whatever the equivalent Windows
incantation is).  On Windows, that database is broken and useless.


The backup is created on Windows from the Linux server using pg_receivewal and 
pg_basebackup.
Can this backup be used for PITR on Linux?
Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


The sorting rules for this locale must be the same in both platforms.
Only locale names are different.

I think they are less alike than you hoped, because if they were alike,
you wouldn't be seeing this problem.
Possibly you could try running contrib/amcheck on the index in question
and see if it reports any issues.
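
For reference, such an amcheck run looks roughly like this (a sketch; the extension ships in contrib):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;
-- Verify the B-tree invariants of the suspect index:
SELECT bt_index_check('desktop_baas_liigid_idx'::regclass);
```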


I tried it and it reports an error:

ERROR:  item order invariant violated for index "desktop_baas_liigid_idx"
DETAIL:  Lower index tid=(3,15) (points to index tid=(16,4098)) higher index tid=(3,16) (points to index tid=(17,4098)) page 
lsn=292/630C0CE8.

SQL state: XX002

Andrus.





Re: Query returns no rows in pg_basebackup cluster

2020-05-22 Thread Andrus

Hi!


ERROR:  item order invariant violated for index "desktop_baas_liigid_idx"
DETAIL:  Lower index tid=(3,15) (points to index tid=(16,4098)) higher index tid=(3,16) (points to index tid=(17,4098)) page 
lsn=292/630C0CE8.

SQL state: XX002

Uh huh ... and I'll bet the same test on the source server is just fine?
I don't find it surprising in the least that different platforms have
different ideas on fine points like how to sort a leading underscore.
Those things just aren't that well standardized.


This column is not used for locale-specific data.

Running 

alter table desktop alter baas type char(8) collate ucs_basic 

fixes the issue. 
Is this fix reasonable?

What other issues may occur?

Can a base backup created on Windows using pg_basebackup be used on Linux without 
such a fix?

Andrus.




Re: Query returns no rows in pg_basebackup cluster

2020-05-24 Thread Andrus

Hi!


No.  Physical copies need to be based on the same platform.

Does the O/S that the client software runs on really affect this?

To the extent that the O/S determines text sort order, yes; see thread.
The short answer here is that we aren't going to support such cases.
If you try to replicate across platforms, and it works, you're in luck.
If it doesn't work, you get to keep both pieces; we will not accept
that as a bug.


In 2017 Peter wrote that ICU-based collations would be offered alongside the 
libc-based collations (1).
Currently it still requires recompiling Postgres on all binary 
replication platforms.
Maybe the ICU locale will be selected automatically during installation in Postgres 13. Using the same ICU locale on all replication platforms 
will hopefully fix the issue.


Currently an option is to use ucs_basic as the default collation when creating the cluster.

(1) https://www.2ndquadrant.com/en/blog/icu-support-postgresql-10/

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-24 Thread Andrus

Hi!


Backup in created in Windows from Linux server using pg_receivewal and 
pg_basebackup .
Can this backup used for PITR in Linux ?

No.  Physical copies need to be based on the same platform.  If you
wish to replicate a cluster without any platform, architecture or even
not-too-many major version constraints, there is also logical
replication available since v10.


Will logical replication also allow two modes:
 1. PITR recovery can be used if needed
 2. Hot standby: user databases in both clusters contain the same data.

How to set up logical replication for all user databases in the cluster so that 
when a new database is added, or new tables are added to a database, they start 
replicating automatically?

Will it require a more powerful backup server to replay the main server's SQL 
stream from the different databases?
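For what it's worth, a per-database sketch (the names allpub/allsub and the 
connection string are assumptions): publications and subscriptions exist per 
database, so each database needs its own pair, and a brand-new database needs 
a new pair created by hand.

```sql
-- On the primary, in each user database:
CREATE PUBLICATION allpub FOR ALL TABLES;

-- On the standby, in the matching database (which must already contain
-- the same table definitions):
CREATE SUBSCRIPTION allsub
    CONNECTION 'host=primary.example.com dbname=mydb user=replikaator'
    PUBLICATION allpub;

-- FOR ALL TABLES covers tables created later on the publisher, but the
-- subscriber must be told to start syncing them:
ALTER SUBSCRIPTION allsub REFRESH PUBLICATION;
```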

Andrus. 






Re: Query returns no rows in pg_basebackup cluster

2020-05-25 Thread Andrus

Hi!


How to set up logical replication for all user databases in the cluster so that
when a new database is added or new tables are added to a database they will
start replicating automatically?

I think that it would be good if you spend some time reading the
documentation on this stuff, particularly the part about restrictions,
to understand the use cases where that can become useful:
https://www.postgresql.org/docs/devel/logical-replication.html


Thank you.
I read it and haven't found any reference to PITR recovery.
For PITR recovery it should probably save SQL statements to files and allow 
specifying a recovery target time when applying the SQL statements to a base 
backup.

Is PITR recovery supported only using binary WAL files?

Other limits can probably be solved.

Andrus.




How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!

pg_basebackup takes 8 hours.
After it is finished,  replication slave does not start:

LOG:  consistent recovery state reached at 2DE/985A5BE0
LOG:  database system is ready to accept read only connections
LOG:  started streaming WAL from primary at 2DE/9900 on timeline 1
replikaator@[unknown] LOG:  received replication command: SHOW 
data_directory_mode
replikaator@[unknown] LOG:  received replication command: IDENTIFY_SYSTEM
replikaator@[unknown] LOG:  received replication command: START_REPLICATION 
2CF/E900 TIMELIN
replikaator@[unknown] ERROR:  requested WAL segment 000102CF00E9 
has already been re
replikaator@[unknown] LOG:  received replication command: SHOW 
data_directory_mode
replikaator@[unknown] LOG:  received replication command: IDENTIFY_SYSTEM
replikaator@[unknown] LOG:  received replication command: START_REPLICATION 
2CF/E900 TIMELIN
replikaator@[unknown] ERROR:  requested WAL segment 000102CF00E9 
has already been removed
...

I tried it again and the same error occurred.

How to force replication to start?

I increased wal parameters in master to

wal_compression=on
max_wal_size = 5GB
min_wal_size = 4GB # was 80MB
wal_keep_segments= 360 # was 180

Will this allow replication to start after pg_basebackup ?

According to the docs, min_wal_size and wal_keep_segments both keep a minimum 
number of WAL segments available for replication.
Why are those parameters duplicated?

Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


On 31. May, 2020, at 21:47, Andrus  wrote:
replikaator@[unknown] ERROR:  requested WAL segment 000102CF00E9 
has already been removed
the message says it all. You need to copy the WAL file 000102CF00E9 and newer to the replica's pg_wal directory because 
it has been removed >already on the master site.

Obviously, you can only do that if the files have been archived.


WAL files are not archived.


Otherwise, you'd have to fully reinitiate the replica.


I have tried to re-initiate the replica several times during low-use periods 
but this error occurs again.


Replication will start again as soon as the requested WALs are copied over to 
the replica.
Use a replication slot to avoid this situation. If you use a replication slot, the master will only remove WAL files which are not 
needed by any one replica.


If a named replication slot is used, commands like

vacuumdb --all --full

will cause the main server to crash due to the disk space limit. The pg_wal 
directory will occupy all free disk space. After that the main server stops.

I tried using wal_keep_segments = 180.
Will setting wal_keep_segments to a higher value allow replication to start 
after pg_basebackup?

Andrus. 






Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


I'm guessing are looking for:
https://www.postgresql.org/docs/12/runtime-config-replication.html
26.2.6. Replication Slots
Replication slots provide an automated way to ensure that the master 
does not remove WAL segments until they have been received by all 
standbys, and that the master does not remove rows which could cause a 
recovery conflict even when the standby is disconnected.


Using a replication slot can cause the pg_wal directory to occupy all free disk 
space, and after that the server stops responding.


This is spelled out here:
https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION
"If you use streaming replication without file-based continuous 
archiving, the server might recycle old WAL segments before the standby 
has received them. If this occurs, the standby will need to be 
reinitialized from a new base backup. You can avoid this by setting 
wal_keep_segments to a value large enough to ensure that WAL segments 
are not recycled too early, or by configuring a replication slot for the 
standby. 


Will wal_keep_segments keep segments also if a named replication slot is not 
used?
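The replication-slot approach quoted above can be sketched as follows (the 
slot name is an assumption; note the disk-space caveat raised in this thread, 
since a slot retains WAL without limit on PostgreSQL 12):

```sql
-- On the master: create a physical slot; WAL needed by the standby is
-- then retained regardless of wal_keep_segments.
SELECT pg_create_physical_replication_slot('standby1');

-- Watch how much WAL the slot is holding back:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
```

The standby then sets primary_slot_name = 'standby1' in its configuration; 
pg_basebackup can also create and use a slot directly with 
--create-slot --slot=standby1.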

Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


In addition to my most recent questions:
What are you trying to achieve?


I want to create an async hot standby server using:


/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main
chmod --recursive --verbose 0700 /var/lib/postgresql/12/main
chown -Rv postgres:postgres /var/lib/postgresql/12/main
/etc/init.d/postgresql start


In other words why do a pg_basebackup if you have a standby receiving WALs?


I don't receive WALs.

Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!

Will wal_keep_segments keep segments also if a named replication slot is 
not used?
Well if you are using a replication slot there is no point in using 
wal_keep_segments. Slots were created, in part at least, so you did not 
have to guess at a wal_keep_segments number.


I don't use a slot.

To really answer this we will need to see the exact commands you are 
using and the sequence they are done in.


Replication server is created using

/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main
chmod --recursive --verbose 0700 /var/lib/postgresql/12/main
chown -Rv postgres:postgres /var/lib/postgresql/12/main
/etc/init.d/postgresql start

Andrus.





Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-05-31 Thread Andrus

Hi!


I want to create an async hot standby server using
/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainold
pg_basebackup --write-recovery-conf -D /var/lib/postgresql/12/main

I don't see where the base backup is being taken from just where it is going.


It is taken from a VPS server over a 20 Mbit public internet connection.
Both servers are running Debian Linux.


I don't receive WALs.
If you are doing binary replication then you are receiving WALs. It's just a matter of whether you are streaming them or shipping 
them over complete.


Using wal_keep_segments=360 also causes the same "WAL file not found" error 
after pg_basebackup.
The master server has 6GB of WAL files. Far fewer than 360 WAL segments are 
generated during the pg_basebackup.
Maybe pg_basebackup skips WAL segments. Maybe using wal_compression=on causes 
the issue.

How to fix this?
How to create the base backup so that the cluster is transferred over the 
internet faster? Maybe it can be transferred in compressed form.
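One hedged possibility (host name and target directory are placeholders): 
pg_basebackup's tar output format supports gzip compression, which shrinks 
what crosses the link.

```shell
# Sketch: compressed tar-format base backup with streamed WAL.
pg_basebackup -h example.com -U replikaator \
    -D /var/lib/postgresql/12/basebackup \
    -Ft -z -Xs --progress --verbose
# Result: base.tar.gz (and pg_wal.tar.gz), which must be unpacked into
# the data directory before starting the standby.
```

This cannot be started in place like the plain (-Fp) format, so it trades 
transfer time for an extra unpack step.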


Andrus.




Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-06-01 Thread Andrus

Hi!


I have tried to re-initiate the replica several times in low-use time but this 
error occurs again.
remove the whole replica's PGDATA/* and do a pg_basebackup again. But before that, make sure wal_keep_segments is big enough on the 
master and,


I renamed whole cluster before pg_basebackup


just as much important, do a vacuumdb -a (takes much space during the process) 
and use archiving!


I run vacuumdb --full --all before pg_basebackup


If named replication slot is used commands like
vacuumdb --all --full
will cause main server crash due to disk space limit. pg_wal directory will 
occupy free disk space. After that main server stops.
if you have disk constraints you will run into trouble sooner or later anyway. Make sure, you have enough disk space. There's no 
way around that anyway.


This space is sufficient for base backup and replication.


I tried using wal_keep_segments = 180.
Will setting wal_keep_segments to a higher value allow replication to start 
after pg_basebackup?
it depends. If you start the replica immediately and don't wait for hours or days, you should be good to go. But that depends on 
different factors, for example, how >many WAL files are written during the pg_basebackup and pg_ctl start of the replica. If more 
than 180 WALs have gone by on the master because it is really busy, >then you're probably lost again. Point being, you'll have to 
launch the replica before WALs are expired!

Again: Make sure you have enough disk space, use archiving and use a 
replication slot.


I tried with wal_keep_segments=360 but the problem persists.
The server generates far fewer than 300 WAL files.

Shell script starts server after pg_basebackup completes automatically:

PGHOST=example.com
PGPASSWORD=mypass
PGUSER=replikaator
export PGHOST  PGPASSWORD PGUSER
/etc/init.d/postgresql stop
mv /var/lib/postgresql/12/main /var/lib/postgresql/12/mainennebaasbakuppi
pg_basebackup --verbose --progress --write-recovery-conf -D 
/var/lib/postgresql/12/main
chmod --recursive --verbose 0700 /var/lib/postgresql/12/main
chown -Rv postgres:postgres /var/lib/postgresql/12/main
/etc/init.d/postgresql start

How to create replication server ?

Andrus.





Re: How to start slave after pg_basebackup. Why min_wal_size and wal_keep_segments are duplicated

2020-06-01 Thread Andrus

Hi!


How to create replication server ?
I always do it this way and it work for me:
$ pg_basebackup -h ${PGHOST} -p ${PGPORT} -U replicator -W -R -D ${PGDATA} -P 
-v -Fp -Xs
After that, I edit ${PGDATA}/postgresql.conf and (w/ PostgreSQL 11 and older 
${PGDATA}/recovery.conf) to make it do what I want and
then I just launch it:
$ pg_ctl start


My script does the same thing as your commands.


From that moment onward, it replicates and applies to the replica. Checks in 
pg_stat_replication on the master and
pg_stat_wal_receiver on the replica >confirm that. They also show the WAL 
switches.
To provoke a WAL switch I always do:
postgres=# checkpoint; select pg_switch_wal();
CHECKPOINT
pg_switch_wal
I just don't understand what you're trying to achieve here.


I want to create replication server.


My guess is, you want to stop and backup the old database cluster,


The old cluster is empty, from initdb. A backup is not needed.


then create a new one in its old directory, right?


pg_basebackup creates new main directory.


In this case, you probably need to change your script to something like this:
PGHOST=remote.example.com
PGPASSWORD=mypass
PGUSER=replikaator
PGDATA=/var/lib/postgresql/12/main
export PGHOST PGPASSWORD PGUSER PGDATA
/etc/init.d/postgresql stop
mv ${PGDATA} /var/lib/postgresql/12/mainennebaasbakuppi
pg_basebackup -h ${PGHOST} -p ${PGPORT} -U ${PGUSER} -W -R -D ${PGDATA} -P -v 
-Fp -Xs
/etc/init.d/postgresql start


pg_basebackup uses environment variables when options are not specified on the 
command line, so my script does the same thing.


Note that my invocation of pg_basebackup asks for the replicator password. This 
is intended. You'd probably want to change that.
Also, no need to play around with ownership and permissions. Do it as "postgres", not as 
"root".


I tried

sudo --user=postgres pg_basebackup 

but got error

could not change directory to "/root": Permission denied
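That particular message appears because sudo keeps root's working directory 
(/root), which the postgres user cannot read. A sketch of a workaround:

```shell
# Sketch: run from a directory the postgres user may read, so the
# "could not change directory to /root" complaint goes away.
cd /tmp
sudo --user=postgres pg_basebackup --verbose --progress \
    --write-recovery-conf -D /var/lib/postgresql/12/main
```

Running the whole script as postgres (not root) also removes the need for the 
chown/chmod steps afterwards.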

Andrus.





canceling statement due to conflict with recovery after pg_basebackup

2020-06-03 Thread Andrus

Hi!

Async binary replication hot standby was started after pg_basebackup.
Running a query on the slave throws an error:

ERROR:  canceling statement due to conflict with recovery

Why?
The query should return table and other sizes in decreasing order.
How to improve it so that this error does not occur?


Log:

2020-06-03 09:40:52 EEST  LOG:  database system was interrupted; last known up 
at 2020-06-03 07:59:56 EEST
2020-06-03 09:41:10 EEST  LOG:  entering standby mode
2020-06-03 09:41:10 EEST  LOG:  redo starts at 2E2/28
2020-06-03 09:41:19 EEST  LOG:  consistent recovery state reached at 2E2/B5A56C8
2020-06-03 09:41:19 EEST  LOG:  database system is ready to accept read only 
connections
2020-06-03 09:41:19 EEST  LOG:  started streaming WAL from primary at 
2E2/C00 on timeline 1
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba ERROR:  canceling statement 
due to conflict with recovery
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba DETAIL:  User query might have 
needed to see row versions that must be removed.
2020-06-03 09:54:23 EEST 85.253.131.166 user@sba STATEMENT:  select
   company_name(n.nspname)::char(20) as company,
   relname::char(25),
   pg_size_pretty(pg_total_relation_size(c.oid))::char(10) as totalsize,
   n.nspname::char(12),
   case
      when c.relkind='i' then 'index'
      when c.relkind='t' then 'toast'
      when c.relkind='r' then 'table'
      when c.relkind='v' then 'view'
      when c.relkind='c' then 'composite type'
      when c.relkind='S' then 'sequence'
      else c.relkind::text
   end::char(14) as "type"
from
   pg_class c
   left join pg_namespace n on n.oid = c.relnamespace
   left join pg_tablespace t on t.oid = c.reltablespace
where
   (pg_total_relation_size(c.oid)>>21)>0 and c.relkind!='t'
order by
   pg_total_relation_size(c.oid) desc
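For what it's worth, two standby-side settings are commonly used against this 
class of error; a hedged postgresql.conf sketch (the values are illustrative 
only):

```
# postgresql.conf on the standby
max_standby_streaming_delay = 60s  # let queries delay WAL replay this long
hot_standby_feedback = on          # ask the primary to keep row versions
                                   # that standby queries still need
```

hot_standby_feedback trades standby query cancellations for extra bloat on 
the primary, so it is a judgment call.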

Andrus.




How to get previous log file

2020-06-28 Thread Andrus

Hi!

select pg_read_file(pg_current_logfile()) 


retrieves today's log file.
Log files are in the log directory:

/var/lib/postgresql/12/main/log# ls
...
postgresql-2020-06-08_00.log  postgresql-2020-06-18_00.log  
postgresql-2020-06-28_00.log
postgresql-2020-06-09_00.log  postgresql-2020-06-19_00.log

How to get yesterday's log file from a remote client application using a 
PostgreSQL query?
Using Postgres 12 on Debian.
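A possible sketch, assuming PostgreSQL 10 or later and sufficient privileges 
(superuser, or the pg_monitor and pg_read_server_files roles):

```sql
-- pg_ls_logdir() lists the files in log_directory (default 'log',
-- relative to the data directory); pick the newest file modified
-- yesterday and read it.
SELECT pg_read_file('log/' || name)
FROM pg_ls_logdir()
WHERE modification::date = current_date - 1
ORDER BY modification DESC
LIMIT 1;
```

Matching on the modification timestamp avoids hard-coding the 
log_filename pattern.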

Andrus.





How to create function returning numeric from string containing percent character

2020-07-22 Thread Andrus

The val function should return the numeric value from a string up to the first 
non-digit character, taking the first decimal separator into account as well:

   val('1,2TEST')  should return 1.2
   val('1,2,3')  should return 1.2
   val('-1,2,3')  should return -1.2

I tried

   CREATE OR REPLACE FUNCTION public.VAL(value text)
 RETURNS numeric AS
   $BODY$
   SELECT coalesce(nullif('0'||substring(Translate($1,',','.'), 
'^-?[0-9]+\.?[0-9]*$'),''),'0')::numeric;
   $BODY$ language sql immutable;

but if the string contains a % character,

   select val('1,2%')

returns 0.

How to force it to return 1.2?

It should work starting from Postgres 9.0
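The regex in the function is anchored with `$`, so any trailing junk (such as 
`%`) makes the whole match fail. A hedged sketch of a fix is to drop the end 
anchor so only the leading numeric prefix is captured:

```sql
CREATE OR REPLACE FUNCTION public.val(value text)
  RETURNS numeric AS
$BODY$
  -- Unanchored at the end: capture an optional sign, digits and at most
  -- one decimal point from the start, and ignore whatever follows.
  SELECT coalesce(
           nullif(substring(translate($1, ',', '.')
                            from '^-?[0-9]+\.?[0-9]*'), ''),
           '0')::numeric;
$BODY$ LANGUAGE sql IMMUTABLE;
```

With this version, select val('1,2%') and val('1,2,3') should both yield 1.2, 
and strings with no leading digits fall back to 0.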

Posted also in

https://stackoverflow.com/questions/63032072/how-to-create-function-returning-value-up-to-first-non-digit-decimal-charcater#63032126

Andrus.





How to distribute products to shop by amount of sales

2020-08-15 Thread Andrus

Hi!

Table of product types  contains desired quantities in shop

    create temp table producttype (
 productype char(10) primary key,
 desired integer check ( desired> 0)
    ) on commit drop ;
    insert into producttype values ( 'SHOE',3);

product table contains products and product types

    create temp table product  (
    product char(20) primary key,
    producttype char(10) references producttype
    ) on commit drop;

    insert into product values 
('SHOE1','SHOE'),('SHOE2','SHOE'),('SHOE3','SHOE'),('SHOE4','SHOE'),('SHOE5','SHOE');


Warehouse table contains quantities in warehouse to distribute

    create temp table warehouse (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop ;
    insert into warehouse values ('SHOE1',50),('SHOE2',60),('SHOE3',70);

Shop table contains quantities in shop

    create temp table shop (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop;
    insert into shop values ('SHOE4',1);
    insert into shop values ('SHOE5',1);

sales table contains sold quantities. The most sold items should be moved from 
stock first.


    create temp table sales (
  product char(20) primary key references product,
  quantity integer check ( quantity> 0)
    ) on commit drop;
    insert into sales values ('SHOE1',100),('SHOE2',200);

How to find the products which should be moved from warehouse to shop so that 
the shop quantity is increased to the producttype.desired
quantity, for products available in the warehouse? Most sold products should be 
moved first.
Only one unit (quantity 1) should be moved per product code.

Using the data above, there should be 3 shoes (producttype.desired) in the shop 
but there are only 2 (sum(shop.quantity) for shoes).

The most sold shoe in the warehouse is SHOE2,
so SHOE2 should be moved from warehouse to shop.

How to find the products which should be moved?
Can some SELECT with a window function be used for this?

PostgreSQL 9.3.5 is used.
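A hedged window-function sketch against the schema above (it assumes one unit 
per moved product and that products already in the shop are skipped; CTEs and 
row_number() are both available in 9.3):

```sql
-- Per product type, rank warehouse products by sales and take as many
-- top sellers as are needed to reach producttype.desired.
WITH shop_totals AS (
    SELECT p.producttype, coalesce(sum(s.quantity), 0) AS in_shop
    FROM product p
    LEFT JOIN shop s ON s.product = p.product
    GROUP BY p.producttype
), candidates AS (
    SELECT p.producttype, w.product,
           row_number() OVER (PARTITION BY p.producttype
                              ORDER BY coalesce(sl.quantity, 0) DESC) AS rn
    FROM warehouse w
    JOIN product p ON p.product = w.product
    LEFT JOIN sales sl ON sl.product = w.product
    WHERE NOT EXISTS (SELECT 1 FROM shop s WHERE s.product = w.product)
)
SELECT c.product
FROM candidates c
JOIN producttype pt ON pt.productype = c.producttype
JOIN shop_totals st ON st.producttype = c.producttype
WHERE c.rn <= pt.desired - st.in_shop;
```

With the sample data above this should return SHOE2: the shop holds 2 shoes, 
desired is 3, and SHOE2 is the best-selling shoe still in the warehouse.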

Andrus.

Posted also in 
https://stackoverflow.com/questions/63433824/how-to-distribute-products-to-shop-by-amount-of-sales




Include 3 previous tokens in syntax error message

2020-12-27 Thread Andrus

Hi!

Postgres returns hard-to-read syntax error messages like

Syntax error at or near ')'

If there are many ) characters in the query, it is not possible to find the 
place where the error occurs.

STATEMENT_POSITION is difficult to use since drivers perform parameter 
replacements, which makes it differ from the source position.

How to fix this so that the last 3 tokens are returned in the message, like

Syntax error at or near ' i > )'

Andrus.


duplicate key value violates unique constraint pg_default_acl_role_nsp_obj_index

2021-01-05 Thread Andrus
efon,language,vabakuup,kasilfirma) on 
kasutaja to "testuser";

grant insert on logifail to "testuser";

Using

PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


Andrus.


Error messages on duplicate schema names

2021-01-06 Thread Andrus

Hi!

ALTER DEFAULT PRIVILEGES IN SCHEMA public,public   GRANT all ON TABLES 
TO testoig;


Throws strange error

Tuple already updated by self

In other case which I posted duplicate schema causes another strange error

duplicate key value violates unique constraint 
"pg_default_acl_role_nsp_obj_index"DETAIL: Key (defaclrole, 
defaclnamespace, defaclobjtype)=(30152, 186783649, r) already exists.


Should duplicate schema names be accepted, or should their usage throw a 
better error message?


Andrus.



REASSIGN OWNED BY in current database only

2021-01-14 Thread Andrus

Hi!

The cluster contains a lot of databases.

All objects in a database should be owned by a separate role whose name is of 
the form

databasename_owner

where databasename is the database's name in the cluster.

This role is granted to cluster users who should have access to the 
database.


The database is restored from another cluster, where it has a different name.

After that,

REASSIGN OWNED BY originaldbname_owner TO restoreddbname_owner

is used to enforce this role in the restored database.

This command also changes the owner of global objects.

How to change the owner only in the current database, leaving the owner of 
global objects unchanged?
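One hedged alternative is to rewrite ownership object-by-object inside the 
current database only, so databases and tablespaces owned by the old role 
stay untouched. A sketch (role names are the ones from this thread; it covers 
tables, views, materialized views and sequences, and similar loops over 
pg_proc or pg_namespace would be needed for functions and schemas):

```sql
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT format('ALTER TABLE %I.%I OWNER TO restoreddbname_owner',
                      n.nspname, c.relname) AS cmd
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind IN ('r', 'v', 'm', 'S')
          AND pg_get_userbyid(c.relowner) = 'originaldbname_owner'
    LOOP
        EXECUTE r.cmd;  -- ALTER TABLE ... OWNER also accepts views/sequences
    END LOOP;
END $$;
```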


Andrus.


permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

The database does not contain large objects.

pg_dump starts to throw error

ERROR:  permission denied for large object 200936761

Tried

select * from "200936761"

but it returned "relation does not exist"

How to fix this? How to find which table causes this error?

How to find and delete all large objects in the database? Maybe they were 
created accidentally.


Using

 PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit


Andrus.



Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!


Database does not contain large objects.

pg_dump starts to throw error

ERROR:  permission denied for large object 200936761


Did you do the pg_dump as a superuser?


No.

pg_dump needs to be invoked by a non-superuser as well. It backs up two 
schemas, public and firma74.


-n public -n firma74

command line options are used.



You can query that to see what is there. I would not go about deleting 
until you find what the large objects are for.



select * from pg_largeobject

returns empty table.

Database has approx 50 schemas and many thousands of tables.

Andrus.



Re: permission denied for large object 200936761

2021-02-01 Thread Andrus

Hi!

>Well the user that runs the pg_dump needs to have permissions on the 
large objects. For more information see below.


How to add the needed permissions to non-superusers?

GRANT command

GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
ON LARGE OBJECT/|loid|/  [, ...]
TO/|role_specification|/  [, ...] [ WITH GRANT OPTION ]

This requires a large object id. How to grant a backup privilege if there are 
no large objects in the database?

Should lo_compat_privileges be set in postgresql.conf, or is there a better 
way?
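Rather than the cluster-wide lo_compat_privileges switch, one hedged sketch is 
to grant read access on every existing large object to the role running 
pg_dump ("backup_user" is an assumed role name):

```sql
DO $$
DECLARE
    loid oid;
BEGIN
    -- pg_largeobject_metadata has one row per large object.
    FOR loid IN SELECT oid FROM pg_largeobject_metadata LOOP
        EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO backup_user', loid);
    END LOOP;
END $$;
```

This only covers objects that exist at the time it runs; newly created large 
objects would need the grant repeated.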


-n public -n firma74

command line options are used.



You can query that to see what is there. I would not go about 
deleting until you find what the large objects are for.



select * from pg_largeobject

returns empty table.



I haven't used large objects in a while. Forgot that they now have 
permissions associated with them. Try:


https://www.postgresql.org/docs/12/catalog-pg-largeobject-metadata.html

instead.


select * from pg_largeobject_metadata

returns 3 rows:

   oid          lomowner
   200936761    30152
   200936762    30152
   200936767    30152

How to find the table and schema which reference these large objects?
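There is no catalog back-link from a large object to the tables that reference 
it; the references are plain oid values stored in ordinary columns. A sketch 
for narrowing the search is to list user-table columns of type oid and then 
check those columns for the three oids reported above:

```sql
SELECT n.nspname, c.relname, a.attname
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE a.atttypid = 'oid'::regtype   -- candidate large-object reference columns
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```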

Andrus.



