How to create function returning numeric from string containing percent character

2020-07-22 Thread Andrus

The val function should return the numeric value from a string, reading up to
the first non-numeric character and honoring only the first decimal point:

   val('1,2TEST')  should return 1.2
   val('1,2,3')  should return 1.2
   val('-1,2,3')  should return -1.2

I tried

   CREATE OR REPLACE FUNCTION public.val(value text)
   RETURNS numeric AS
   $BODY$
   SELECT coalesce(nullif('0'||substring(translate($1, ',', '.'),
          '^-?[0-9]+\.?[0-9]*$'), ''), '0')::numeric;
   $BODY$ LANGUAGE sql IMMUTABLE;

but if the string contains a % character,

   select val('1,2%')

returns 0.

How to force it to return 1.2 ?

It should work on Postgres 9.0 and later.

Posted also in

https://stackoverflow.com/questions/63032072/how-to-create-function-returning-value-up-to-first-non-digit-decimal-charcater#63032126

Andrus.





Re: Logical replication from 11.x to 12.x and "unique key violations"

2020-07-22 Thread Thomas Kellerer
Thomas Kellerer schrieb am 13.07.2020 um 11:52:
> If I create the publication with all needed tables (about 50) at
> once, I get "duplicate key value violates unique constraint xxx_pkey"
> errors during the initial replication (when creating the
> subscription).

Turns out the problem was - once again - between the keyboard and the chair.

The script/job that initializes the database structure (to make sure primary and 
secondary are in sync) also includes statements to set up the lookup tables. So 
obviously the initial sync would fail for those tables. When I restarted 
replication, I truncated all tables without re-running the setup script.

I wonder if it makes sense to add an "on conflict (pk) do nothing" option to 
the logical replication features ;)

Sorry for the noise
Thomas





Re: Switching Primary Keys to BigInt

2020-07-22 Thread Daniel Verite
Mohamed Wael Khobalatte wrote:

> > alter table change_seq alter COLUMN id set data
> > type bigint;

> This is significant downtime, since it locks exclusively, no? We want to
> avoid that.

Well, in the steps you mentioned upthread, the transaction starts by
doing LOCK TABLE some_table, so it will hold an exclusive lock on it
for the rest of the transaction.

If you can test how the ALTER TABLE... SET TYPE ... compares
to your procedure in terms of downtime, that would be interesting.
To me, it's not clear why the procedure in multiple steps would
be better overall than a single ALTER TABLE.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite




Re: Switching Primary Keys to BigInt

2020-07-22 Thread Mohamed Wael Khobalatte
On Wed, Jul 22, 2020 at 9:27 AM Daniel Verite 
wrote:

> Mohamed Wael Khobalatte wrote:
>
> > > alter table change_seq alter COLUMN id set data
> > > type bigint;
>
> > This is significant downtime, since it locks exclusively, no? We want to
> > avoid that.
>
> Well, in the steps you mentioned upthread, the transaction starts by
> doing LOCK TABLE some_table, so it will hold an exclusive lock on it
> for the rest of the transaction.
>
> If you can test how the ALTER TABLE... SET TYPE ... compares
> to your procedure in terms of downtime, that would be interesting.
> To me, it's not clear why the procedure in multiple steps would
> be better overall than a single ALTER TABLE.
>

We lock the table as a precaution, with the understanding that we are
undergoing a "small" downtime to finish replacing the int id by the new
bigint. The only slow thing in my procedure is the sequential scan that the
ADD CONSTRAINT does because the column is a primary key. A direct alter
table would be far slower, not to mention space requirements?
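For readers following along, the multi-step procedure being compared against a single ALTER TABLE looks roughly like this (a sketch; the table, column, index, and sequence names are illustrative, not taken from the thread):

```sql
-- 1. Add the replacement column (instant; no table rewrite).
ALTER TABLE some_table ADD COLUMN new_id bigint;

-- 2. Backfill in batches, outside any long transaction:
--    UPDATE some_table SET new_id = id WHERE id BETWEEN ... AND ...;

-- 3. Build the unique index ahead of time, without blocking writes.
CREATE UNIQUE INDEX CONCURRENTLY some_table_new_id_idx ON some_table (new_id);

-- 4. Final, short-downtime transaction.
BEGIN;
LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE;
ALTER TABLE some_table DROP CONSTRAINT some_table_pkey;
ALTER TABLE some_table RENAME COLUMN id TO old_id;
ALTER TABLE some_table RENAME COLUMN new_id TO id;
-- ADD CONSTRAINT still scans the table to verify the rows are non-null;
-- this is the sequential scan discussed above.
ALTER TABLE some_table ADD CONSTRAINT some_table_pkey
    PRIMARY KEY USING INDEX some_table_new_id_idx;
ALTER TABLE some_table ALTER COLUMN id SET DEFAULT nextval('some_table_id_seq');
COMMIT;
```

PRIMARY KEY USING INDEX (available since 9.1) avoids building the index under the exclusive lock, but the NOT NULL validation scan remains, which matches the observation above.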


Re: Logical replication from 11.x to 12.x and "unique key violations"

2020-07-22 Thread Adrian Klaver

On 7/21/20 10:36 PM, Thomas Kellerer wrote:

Adrian Klaver schrieb am 21.07.2020 um 17:07:

No, as mentioned, those are varchar(20) columns.
The values are generated by the application (no default value defined for the 
column)


Aah, I see my mistake; I was going off your follow-up question, not the
original post. In that original post, though, you had the PK containing
a varchar(100) column. Can we see the table schema and the PK
definition for at least one of the tables that threw an error?



Sorry about the confusion; some PKs are indeed defined as varchar(100), some as 
varchar(20), and some as varchar(15).
I was also wrong about the generation: there is indeed a default value 
defined, using a self-written ID generation function.
But during replication that function isn't called, so it shouldn't matter, I 
guess.

Here are two examples of failing tables:

 CREATE TABLE IF NOT EXISTS emp_status
 (
    emp_status_id   varchar(15)   DEFAULT generate_id('EA') NOT NULL PRIMARY KEY,
    status_name     varchar(20)   NOT NULL UNIQUE
 );

 CREATE TABLE IF NOT EXISTS company
 (
    comp_id varchar(15)   DEFAULT generate_id('CO') NOT NULL PRIMARY KEY,
    name    varchar(50)   NOT NULL UNIQUE,
    country varchar(50)   NOT NULL,
    code    varchar(20)   NOT NULL
 );

Both tables contain only a few rows (fewer than 10), and e.g. for the status 
lookup the log entry was:

LOG:  logical replication table synchronization worker for subscription "foo", table "emp_status" has started
ERROR:  duplicate key value violates unique constraint "emp_status_pkey"
DETAIL:  Key (employee_available_status_id)=(BUJ4XFZ7ATY27EA) already exists.
CONTEXT:  COPY employee_available_status, line 1


I know your subsequent post explained the problem. Still, I'm not fully 
understanding the above. I understand the "duplicate key value violates 
unique constraint "emp_status_pkey"" part. What I don't understand is 
where employee_available_status_id and employee_available_status are 
coming from. Or is this a copy/paste issue?




Thomas





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Switching Primary Keys to BigInt

2020-07-22 Thread Daniel Verite
Mohamed Wael Khobalatte wrote:

> We lock the table as a precaution, with the understanding that we are
> undergoing a "small" downtime to finish replacing the int id by the new
> bigint

Ah, sorry I overlooked that most row updates are done
pre-transaction in a preliminary step:
  /* in batches, we update all the rows to new_id = id */


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite




Problem with pg_service.conf

2020-07-22 Thread Michał Lis

Hello,

I can't connect to a database via a service defined in the pg_service.conf 
file from a remote machine.


Connection from the local machine using this service definition works fine.

I tested it on PostgreSQL 9.4 and 9.6. The other problem is with the 
location of the pg_service.conf file.


Reading solutions on the internet, I found that this file may be placed in 
the global configuration folder.


In my case it is D:\PostgreSQLx86\9.6\etc (the SYSCONFDIR variable 
displayed by pg_config.exe).


Placing the file into this folder does not work. I had to define the 
global system variable PGSERVICEFILE = 
D:\PostgreSQLx86\9.6\etc\pg_service.conf.


After that, the connection to the database via the service began to work, but 
only on the local machine.


To reproduce the problem:

1. Create pg_service.conf file like this:

[test4]
host=192.168.1.2
port=5433
dbname=test
user=postgres
password=abcd

2. Save this file in a directory; in my case it was
D:\PostgreSQLx86\9.6\etc\pg_service.conf
(line endings in Linux style, via Notepad++)

3. Set the global system variable:
PGSERVICEFILE =D:\PostgreSQLx86\9.6\etc\pg_service.conf

4. Restart the computer for the PGSERVICEFILE setting to take effect

5. Open pgAdmin 3 or 4 (either one).

6. Add and Set database connection
- Name: test
- Host: 192.168.1.2
- Port: 0
- Service: test4
- User: 
- Password: 

On the local machine the connection will be established, but on the remote 
machine the error:


"Definition of service "test4" not found"

will be raised.

Connecting from the remote machine to the server by host, port, user and 
password works fine.


PostgreSQL version: 9.6.11 and 9.4
Operating system: Windows 7 x64 Pro/Ultimate

Regards
Michal







HA setup with pg pool in docker

2020-07-22 Thread Vasu Madhineni
Hi Team,

We are planning to build an HA setup with Pgpool in Docker. Could you please
let us know the prerequisites for it.

1. How many servers are needed for this? Can we plan with 2 servers, with a
config like the one below?
   Primary instance and Pgpool in two different Docker containers on server
1, and a standby instance in a Docker container on server 2.

2. User requirement: users need passwordless authentication between
nodes.

3. Port or firewall rules between servers.

4. If we plan to create a network bridge between all Docker containers
across the 2 servers, are there any prerequisites for that?

Thanks in advance.

Regards,
Vasu Madhineni


CASCADE/fkey order

2020-07-22 Thread Samuel Nelson
Hi all,

We've got an interesting case where we want deletes to cascade if one table
was hit directly, but not another.  We can show that the delete _would_
cascade from one foreign key relationship, but the delete is actually
blocked by the foreign key constraint from the other relationship.

A sort of simplified view of the tables:
create table foo (
id integer primary key generated always as identity
);

create table bar (
id integer primary key generated always as identity
);

create table foo_bar (
foo_id integer not null,
bar_id integer not null,
primary key (foo_id, bar_id)
);

alter table foo_bar add constraint foo_bar_foo foreign key (foo_id)
references foo(id) on delete cascade;
alter table foo_bar add constraint foo_bar_bar foreign key (bar_id)
references bar(id);

create table baz (
id integer primary key generated always as identity,
foo_id integer not null
);

alter table baz add constraint baz_foo foreign key (foo_id) references
foo(id) on delete cascade;

create table bazinga (
id integer primary key generated always as identity,
foo_id integer not null,
bar_id integer not null,
baz_id integer not null
);

alter table bazinga add constraint bazinga_foo_bar foreign key (foo_id,
bar_id) references foo_bar (foo_id, bar_id);
alter table bazinga add constraint bazinga_baz foreign key (baz_id)
references baz(id) on delete cascade;

What we wanted to happen:
delete from foo where id = 3;
-- cascades through the tree, deleting rows in bazinga through the baz_id
relationship

delete from foo_bar where foo_id = 3 and bar_id = 1;
-- violates foreign key constraint bazinga_foo_bar
-- (this works as expected)

What actually happened:
delete from foo where id = 3;
-- violates foreign key constraint bazinga_foo_bar

How I've currently fixed it:
alter table bazinga add constraint bazinga_foo foreign key (foo_id)
references foo(id) on delete cascade;
-- this foreign key relationship seems to be cascaded to earlier in the
query

My questions:

What is the order of operations between cascading deletes and constraint
checking?  From what I can tell from the above, it seems like the delete
cascades to each table in turn, and the constraints are checked at the time
that the table is hit.

How do I know which table will be cascaded to first?

Is there a way to force the delete to cascade to tables in a specific order?

-Sam

https://github.com/nelsam

"As an adolescent I aspired to lasting fame, I craved factual certainty, and
I thirsted for a meaningful vision of human life -- so I became a scientist.
This is like becoming an archbishop so you can meet girls."
-- Matt Cartmill


Re: Logical replication from 11.x to 12.x and "unique key violations"

2020-07-22 Thread Thomas Kellerer

Adrian Klaver schrieb am 22.07.2020 um 16:42:

Or is this a copy/paste issue?


That.




Re: CASCADE/fkey order

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 8:24 AM Samuel Nelson 
wrote:

> Is there a way to force the delete to cascade to tables in a specific
> order?
>

Not really, but you can defer constraint checking.

https://www.postgresql.org/docs/12/sql-set-constraints.html
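
A sketch of what that looks like with the schema from the original post, assuming the constraint has first been made deferrable:

```sql
-- One-time setup (PostgreSQL 9.4+ syntax for altering an existing constraint):
-- ALTER TABLE bazinga ALTER CONSTRAINT bazinga_foo_bar DEFERRABLE;

BEGIN;
SET CONSTRAINTS bazinga_foo_bar DEFERRED;
-- The cascade may now delete foo_bar rows before the matching bazinga rows
-- are gone; the FK is only checked at COMMIT, by which point both are deleted.
DELETE FROM foo WHERE id = 3;
COMMIT;
```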

David J.


Re: Problem with pg_service.conf

2020-07-22 Thread Ron

On 7/21/20 4:35 PM, Michał Lis wrote:

> I can't connect to a database via service defined in pg_service.conf file
> from remote machine.
> [...]
> 3. Set the global system variable:
> PGSERVICEFILE =D:\PostgreSQLx86\9.6\etc\pg_service.conf
>
> 4. Restart computer to take effects for setting PGSERVICEFILE

Are you sure that PGSERVICEFILE is being defined correctly?










--
Angular momentum makes the world go 'round.




Re: Problem with pg_service.conf

2020-07-22 Thread Adrian Klaver

On 7/21/20 2:35 PM, Michał Lis wrote:

> I can't connect to a database via service defined in pg_service.conf
> file from remote machine.
>
> Connection from local machine using this service definition works fine.
> [...]
> On the local machine the connection will be established but on remote
> machine the error:
>
> "Definition of service "test4" not found"
>
> will be raised


Where is the server located?

Are you on remote machine when you try to use the service file?

If so, that is not going to work as the remote machine will not have 
access to the local environment.
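
In other words, pg_service.conf is read by the libpq client, not by the server, so the machine you connect *from* needs its own copy. A sketch of the setup on the remote Windows client (the paths here are examples, not from the report):

```
On the remote client machine:
  1. Copy pg_service.conf there, e.g. to C:\pgconf\pg_service.conf
  2. Set the variable on that machine:
       setx PGSERVICEFILE C:\pgconf\pg_service.conf
  3. Open a new console (so the variable is picked up) and connect:
       psql service=test4
```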













--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CASCADE/fkey order

2020-07-22 Thread Samuel Nelson
I checked, and changing the `bazinga_foo_bar` constraint to:

alter table bazinga add constraint bazinga_foo_bar foreign key (foo_id,
bar_id) references foo_bar (foo_id, bar_id) deferrable initially deferred;

seems to make it work as we were expecting.  Is that particularly
costly?  Should I only set the constraint to be deferred when we really
need it?  Would it be more efficient to perform the deletes explicitly
within a transaction rather than relying on the cascades and deferring that
one constraint?

Our resident ex-Oracle DBA said that deferred constraints used to be
heavily recommended against, but he also admitted that he hasn't kept up
with that in the past 10 years.

-Sam

https://github.com/nelsam

"As an adolescent I aspired to lasting fame, I craved factual certainty, and
I thirsted for a meaningful vision of human life -- so I became a scientist.
This is like becoming an archbishop so you can meet girls."
-- Matt Cartmill


On Wed, Jul 22, 2020 at 10:31 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Jul 22, 2020 at 8:24 AM Samuel Nelson 
> wrote:
>
>> Is there a way to force the delete to cascade to tables in a specific
>> order?
>>
>
> Not really, but you can defer constraint checking.
>
> https://www.postgresql.org/docs/12/sql-set-constraints.html
>
> David J.
>
>


Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Surya Widyanto

Hi,

I have PostgreSQL v12.3 (x64) installed on my laptop running Windows 10 
Home Single Language (64-bit).
The problem has happened two or three times since I freshly installed 
PostgreSQL and Windows on 17-05-2020, 06:04:56 (GMT).
By two or three times I mean: when this slow connection problem happens, 
it occurs for a few days in a row. Then things return to normal, with a 
fast login/connection process.


I've tried restarting the laptop and restarting the service, but nothing 
improved the slow login/connection process.


I've also tried turning off the *log_hostname* setting in *postgresql.conf* 
and turning off the *Windows Firewall* and *Windows Defender Real-Time 
Scanner*, but nothing improved the slow login/connection.


I've also checked the Postgres log file, but not many errors show up in 
the log, except for this one that appears a few times:

2020-07-22 14:43:22.312 +00 [13740] LOG:  could not receive data from client: An existing connection was forcibly closed by the remote host.


This problem happens with all applications, such as *psql*, *pgAdmin*, 
and *Navicat*, all with the same symptoms of slow login/connection to my 
PostgreSQL v12.3 (x64) service; and once connected, the session is easily 
disconnected.


If needed my laptop specs are:

 * Intel Core i7-9750H
 * 16GB of RAM
 * Samsung SSD 970 EVO Plus 2TB

Can the database size affect the connection time required to start a 
database session?
FYI, one of my databases has a size of 209 GB, but currently I'm not 
using or connecting to that database.


Thanks in advance.

Regards,
Surya




Re: Transaction control in SECURITY DEFINER procedures

2020-07-22 Thread Bruce Momjian
On Tue, Jul  7, 2020 at 12:13:42PM +0100, Chris Sterritt wrote:
> The documentation for CREATE PROCEDURE informs us "A SECURITY DEFINER 
> procedure
> cannot execute transaction control statements (for example, COMMIT and 
> ROLLBACK
> , depending on the language)."
> 
> Can anyone let me know why this is so and are there any plans to remove this
> restriction in future releases?

I have a reproducible case:

CREATE OR REPLACE PROCEDURE transcheck () AS $$
BEGIN
PERFORM 1;
COMMIT;
END;
$$ LANGUAGE plpgsql;

CALL transcheck ();

ALTER PROCEDURE transcheck SECURITY DEFINER;

CALL transcheck ();
--> ERROR:  invalid transaction termination
--> CONTEXT:  PL/pgSQL function transcheck() line 4 at COMMIT

and this is the reason:

    commit 3884072329
    Author: Peter Eisentraut
    Date:   Wed Jul 4 09:26:19 2018 +0200

    Prohibit transaction commands in security definer procedures

    Starting and aborting transactions in security definer
    procedures doesn't work.  StartTransaction() insists that
    the security context stack is empty, so this would currently
    cause a crash, and AbortTransaction() resets it.  This could
    be made to work by reorganizing the code, but right now we
    just prohibit it.

    Reported-by: amul sul
    Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b96Gupt_LFL7uNyy3c50-wbhA68NUjiK5%3DrF6_w%3Dpq_T%3DQ%40mail.gmail.com

so, yes, it is possible, but no one has implemented it.  This is the
first complaint I have heard about this.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Switching Primary Keys to BigInt

2020-07-22 Thread Mohamed Wael Khobalatte
On Wed, Jul 22, 2020 at 11:13 AM Daniel Verite 
wrote:

> Mohamed Wael Khobalatte wrote:
>
> > We lock the table as a precaution, with the understanding that we are
> > undergoing a "small" downtime to finish replacing the int id by the new
> > bigint
>
> Ah, sorry I overlooked that most row updates are done
> pre-transaction in a preliminary step:
>   /* in batches, we update all the rows to new_id = id */
>

No worries. I suppose the answer to the original question, which is how to
avoid a table scan when adding a primary key constraint to a newly
backfilled column, is "there is no way"? Downtime might be at least as long
as the table scan.


Re: Doubt in mvcc

2020-07-22 Thread Bruce Momjian
On Mon, Jul 13, 2020 at 10:41:28AM +0200, Francisco Olarte wrote:
> Rama:
> 
> On Mon, Jul 13, 2020 at 9:52 AM Rama Krishnan  wrote:
> > I m preparing for interview one of the recruiter asked me mvcc drawbacks as 
> > i told due to mvcc it use more space and need to perform maintenance 
> > activity.
> > Another one is the same data causes an update conflict because two 
> > different transactions can update the same version of the row.
> >  he told its wrong, kindly tell me will you please tell me its correct or 
> > wrong?
> 
> I'm not sure I understand your question too well, you may want to
> refresh/expand.
> 
> One interpretation is, in a pure MVCC context, two transactions, say 5
> and 6, could try to update a tuple valid for [1,) and end up
> generating two new tuples, [5,), [6,) and closing the original at
> either [1,5) or [1,6) .
> 
> That's why MVCC is just a piece, locking is other. On a MVCC the
> tuples are locked while a transaction manipulates them. Other
> transactions may read them, which is why readers do not block writers,
> but two updates on the same tuple serialize.

You might want to look at this:

https://momjian.us/main/presentations/internals.html#mvcc
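
The reader/writer behavior Francisco describes can be observed directly in two concurrent psql sessions (a sketch; the table and values are illustrative):

```sql
-- Session 1:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- row-locks the tuple

-- Session 2, concurrently:
SELECT balance FROM accounts WHERE id = 1;  -- sees the old version; not blocked
UPDATE accounts SET balance = balance + 5 WHERE id = 1;   -- blocks on session 1's lock

-- Session 1:
COMMIT;  -- session 2's UPDATE now proceeds against the committed new version
```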

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Adrian Klaver

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running Windows 10 
Home Single Language (64-bit).
In two or three times since I fresh install the PostgreSQL and windows 
on 17-05-2020, 06:04:56 (GMT).
By two or three times I mean, if this slow connection problem happen, 
than it wil occur in a view days in a row. Then it will return to normal 
with fast login/connection process.


I've try to restart the laptop, restart the service but nothing give the 
sign to improve the slow login/connection process.


I've also try to turn off the *log_hostname* config in *postgresql.conf* 
and try to turn off the *Windows Firewall, Windows Defender Real-Time 
Scanner* but nothing works to improve the slow login connection.


I've also try to check the Postgres Log file but not much error shows in 
the log except for this error that appears a few time:
2020-07-22 14:43:22.312 +00 [13740] LOG:  could not receive data from 
client: An existing connection was forcibly closed by the remote host.


Is the database server you are connecting to on the laptop or some other 
machine?


Where is this Postgres log?



This problem happens with all of the application such as *psql*, 
*pgAdmin*, *Navicat*, all with the same symptoms of slow 
login/connection to my PostgreSQL v12.3 (x64) service and if connected, 
then it will be easy to disconnected.


If needed my laptop specs are:

  * Intel Core i7-9750H
  * 16GB of RAM
  * Samsung SSD 970 EVO Plus 2TB

Is database size can affect this connection time required to start a 
database session?


No, a connection does not read the entire database.

FYI one of my database has size of 209GB, but currently I'm not using or 
connect to that database.


Thanks in advance.

Regards,
Surya





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Switching Primary Keys to BigInt

2020-07-22 Thread Michael Lewis
On Wed, Jul 22, 2020 at 12:23 PM Mohamed Wael Khobalatte <
mkhobala...@grubhub.com> wrote:

> No worries. I suppose the answer to the original question, which is how to
> avoid a table scan when adding a primary key constraint to a newly
> backfilled column is "there is no way"? Downtime might be at least as long
> as the table scan.
>

One presumes you may be planning to use pglogical or another similar
solution to upgrade to a new Postgres version soon, and would have a
convenient time then to change the schema. I am curious: why not just stick
with the single-column unique index and forgo the primary key
constraint for now? If you are concerned about the possibility of a single
null value being inserted, you could add a NOT VALID check constraint
to enforce that for future rows.

You do you. Obviously testing the primary key on a full replica of the data
with similar hardware and configs will give you a pretty good idea of the
time for that tablescan and adding the constraint in real life. Given your
email domain, I can guess why you would need to absolutely minimize
downtime.
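
A sketch of that alternative (index and constraint names are illustrative):

```sql
-- Unique index built without blocking writes:
CREATE UNIQUE INDEX CONCURRENTLY some_table_id_key ON some_table (id);

-- Disallow NULLs in future rows without scanning existing ones;
-- NOT VALID skips validation of rows already in the table.
ALTER TABLE some_table
    ADD CONSTRAINT some_table_id_not_null CHECK (id IS NOT NULL) NOT VALID;
```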


Re: Switching Primary Keys to BigInt

2020-07-22 Thread Mohamed Wael Khobalatte
>
> One presumes you may be planning to use pglogical or another similar
> solution to upgrade to a new Postgres version soon, and would have a
> convenient time then to change schema. I am curious, why not just stick
> with the single column unique index and forgo for the primary key
> constraint for now? If you are concerned about the possibility of a single
> null value being inserted, then you could add a not valid check constraint
> to enforce that for future rows.
>

Believe it or not I pitched a similar idea internally. I'll explore it in a
test run.


> Given your email domain, I can guess why you would need to absolutely
> minimize downtime.
>

Right. ;)


Re: Switching Primary Keys to BigInt

2020-07-22 Thread Mohamed Wael Khobalatte
> Believe it or not I pitched a similar idea internally. I'll explore it in
a test run.

By similar idea, I am referencing your suggestion of dropping the primary
key constraint.

>


Re: Switching Primary Keys to BigInt

2020-07-22 Thread Ireneusz Pluta/wp.pl

W dniu 2020-07-21 o 17:30, Mohamed Wael Khobalatte pisze:

we are planning to move some primary keys from int to bigint because we are 
approaching the type limit
If that does not break your business logic, you might arrange to use the negative half of the ::int 
value range. Ugly, but this might at least buy you some time before finding the definitive and elegant 
way, if you are under some pressure. I do not recommend this, but it is what once saved my life 
(or at least one night), after I realized that my PK had already reached the limit :-).
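
For completeness, the stopgap Ireneusz describes amounts to something like this (a sketch, with an illustrative sequence name; and, as he says, not recommended):

```sql
-- Reuse the untouched negative half of the int4 range:
ALTER SEQUENCE some_table_id_seq
    MINVALUE -2147483648
    RESTART WITH -2147483648;
-- nextval() now hands out -2147483648, -2147483647, ... and counts up
-- toward zero, buying time for a real int -> bigint migration.
```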





Re: Switching Primary Keys to BigInt

2020-07-22 Thread Mohamed Wael Khobalatte
On Wed, Jul 22, 2020 at 4:45 PM Ireneusz Pluta/wp.pl  wrote:

> W dniu 2020-07-21 o 17:30, Mohamed Wael Khobalatte pisze:
> > we are planning to move some primary keys from int to bigint because we
> are approaching the type limit
> If that does not break your business logic, you might arrange to use the
> negative half of the ::int
> value range. Ugly, but this might at least buy you some time before
> finding the definite and elegant
> way, if you are under some pressure. I do not recommend this, but this is
> what once saved my life
> (or at least one night), after I realized that my PK already reached the
> limit :-).
>

Very clever. I think we are set with the current approach. The issue was
more how much downtime, not how fast we are approaching the limit (which is
also a real issue but not of concern in this thread).


Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Surya Widyanto

On 23-07-2020 02:07, Adrian Klaver wrote:

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running Windows 
10 Home Single Language (64-bit).
In two or three times since I fresh install the PostgreSQL and 
windows on 17-05-2020, 06:04:56 (GMT).
By two or three times I mean, if this slow connection problem happen, 
than it wil occur in a view days in a row. Then it will return to 
normal with fast login/connection process.


I've try to restart the laptop, restart the service but nothing give 
the sign to improve the slow login/connection process.


I've also try to turn off the *log_hostname* config in 
*postgresql.conf* and try to turn off the *Windows Firewall, Windows 
Defender Real-Time Scanner* but nothing works to improve the slow 
login connection.


I've also try to check the Postgres Log file but not much error shows 
in the log except for this error that appears a few time:
2020-07-22 14:43:22.312 +00 [13740] LOG:  could not receive data from 
client: An existing connection was forcibly closed by the remote host.


Is the database server you are connecting to on the laptop or some 
other machine?


The database is on the same machine/laptop. I tried connecting using 
*localhost* and *127.0.0.1*; both have the same slow connection problem.

Where is this Postgres log?

The path of the log is 
*\log\postgresql-%Y-%m-%d_%H%M%S.log*
Is there any config I can change to get more verbose log information to 
trace and debug the connection problem?


This problem happens with all of the application such as *psql*, 
*pgAdmin*, *Navicat*, all with the same symptoms of slow 
login/connection to my PostgreSQL v12.3 (x64) service and if 
connected, then it will be easy to disconnected.


If needed my laptop specs are:

  * Intel Core i7-9750H
  * 16GB of RAM
  * Samsung SSD 970 EVO Plus 2TB

Is database size can affect this connection time required to start a 
database session?


No, a connection does not read the entire database.

FYI one of my database has size of 209GB, but currently I'm not using 
or connect to that database.


Thanks in advance.

Regards,
Surya









Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Adrian Klaver

On 7/22/20 2:43 PM, Surya Widyanto wrote:

On 23-07-2020 02:07, Adrian Klaver wrote:

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,



Is the database server you are connecting to on the laptop or some 
other machine?


The Database is on the same machine/laptop, I try to connect using 
*localhost* and *127.0.0.1*, both have the same slow connection problem.

Where is this Postgres log?

The path of the log is on 
*\log\postgresql-%Y-%m-%d_%H%M%S.log*
Is there any config that I can change to get more verbose log 
information to trace and debug the connection problem?


Take a look here:

https://www.postgresql.org/docs/12/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN
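
For connection problems specifically, the connection-related settings on that page are probably the most useful. A sketch of postgresql.conf entries (reload the server afterwards):

```
log_connections = on
log_disconnections = on
log_error_verbosity = verbose
log_min_messages = debug2
```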



Thanks in advance.

Regards,
Surya










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Adrian Klaver

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running Windows 10 
Home Single Language (64-bit).
In two or three times since I fresh install the PostgreSQL and windows 
on 17-05-2020, 06:04:56 (GMT).


Meant to ask earlier, where did you install Postgres from?

By two or three times I mean: when this slow connection problem happens, 
it occurs for a few days in a row. Then it returns to normal, with a 
fast login/connection process.





Regards,
Surya





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: CASCADE/fkey order

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 9:03 AM Samuel Nelson 
wrote:

> seems to fix it to work as we were expecting.  Is that particularly
> costly?  Should I only set the constraint to be deferred when we really
> need it?  Would it be more efficient to perform the deletes explicitly
> within a transaction rather than relying on the cascades and deferring that
> one constraint?
>

I don't know.  I tend to go with only deferring the check if the specific
transaction requires it.  If there are no issues I would presume that
checking at the end would be more efficient.  But if there are problems you
could end up performing unnecessary work.  Memory consumption probably
increases as well since constraint related information cannot be discarded
as each command completes but must be kept around for the eventual
validation.

David J.
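
As an illustration of deferring the check only when needed, a sketch 
with hypothetical table and constraint names (the constraint must have 
been declared DEFERRABLE for SET CONSTRAINTS to affect it):

```sql
-- Hypothetical schema: orders.customer_id references customers(id), e.g.
--   ALTER TABLE orders ADD CONSTRAINT orders_customer_fkey
--     FOREIGN KEY (customer_id) REFERENCES customers (id)
--     ON DELETE CASCADE DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS orders_customer_fkey DEFERRED;  -- this transaction only
DELETE FROM customers WHERE id = 42;            -- cascades fire; FK check waits
COMMIT;                                         -- deferred checks run here
```

Outside such a transaction the constraint keeps its default immediate 
behavior, so nothing else pays the deferral cost.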


Re: How to create function returning numeric from string containing percent character

2020-07-22 Thread David G. Johnston
On Wed, Jul 22, 2020 at 3:50 AM Andrus  wrote:

> val function should return numeric value from string up to first non-digit
> character, considering first decimal point also:
>
> val('1,2TEST')  should return 1.2
> val('1,2,3')  should return 1.2
> val('-1,2,3')  should return -1.2
>
> SELECT coalesce(nullif('0'||substring(Translate($1,',','.'),
> '^-?[0-9]+\.?[0-9]*$'),''),'0')::numeric;
> select val('1,2%')
> How to force it to return 1.2 ?
>
> It should work starting from Postgres 9.0


Removing the $ from your regexp pattern should work for the 4 provided
examples.

You cannot remove stuff from the end of a string if you require that the
end of said string match what you want to return.

David J.
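
Putting that suggestion together, a sketch of the adjusted function. 
Note the `'0' ||` prefix from the original is also dropped here, since 
prefixing a negative match such as `-1.2` would produce `0-1.2`, which 
fails the numeric cast:

```sql
CREATE OR REPLACE FUNCTION public.val(value text)
  RETURNS numeric AS
$BODY$
-- Replace commas with dots, then keep only the leading numeric prefix;
-- without the trailing $ the regexp may match just part of the string.
SELECT coalesce(
         nullif(substring(translate($1, ',', '.'),
                          '^-?[0-9]+\.?[0-9]*'), ''),
         '0')::numeric;
$BODY$ LANGUAGE sql IMMUTABLE;

-- select val('1,2%');    -- 1.2
-- select val('1,2TEST'); -- 1.2
-- select val('-1,2,3');  -- -1.2
-- select val('TEST');    -- 0
```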


Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Surya Widyanto

On 23-07-2020 05:26, Adrian Klaver wrote:

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running Windows 
10 Home Single Language (64-bit).
Two or three times since I fresh-installed PostgreSQL and 
Windows on 17-05-2020, 06:04:56 (GMT), this problem has occurred.


Meant to ask earlier, where did you install Postgres from?

I've downloaded the installer executable from 
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads, 
the Windows x86-64 version.


Then I installed it on my Windows machine with the binaries path *"C:\Program 
Files\PostgreSQL\12\"* and the data path *"H:\PostgreSQL\Data\12.x\"*.
Both the *C:* and *H:* drives are partitions on the NVMe PCIe Gen. 3 x4 SSD 
drive mentioned earlier.


For additional information, I've also installed *PostGIS v3.0 r15475 
(x64)* for *PostgreSQL v12*.


By two or three times I mean: when this slow connection problem happens, 
it occurs for a few days in a row. Then it returns to normal, 
with a fast login/connection process.





Regards,
Surya









Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Rob Sargent


> On Jul 22, 2020, at 4:58 PM, Surya Widyanto  wrote:
> 
> On 23-07-2020 05:26, Adrian Klaver wrote:
>> On 7/22/20 10:24 AM, Surya Widyanto wrote: 
>>> Hi, 
>>> 
>>> I have PostgreSQL v12.3 (x64) installed on my Laptop running Windows 10 
>>> Home Single Language (64-bit). 
>>> Two or three times since I fresh-installed PostgreSQL and Windows on 
>>> 17-05-2020, 06:04:56 (GMT), this problem has occurred. 
>> 
>> Meant to ask earlier, where did you install Postgres from? 
>> 
> I've downloaded the installer executable from 
> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads, the 
> Windows x86-64 version.
> 
> Then I installed it on my Windows machine with the binaries path "C:\Program 
> Files\PostgreSQL\12\" and the data path "H:\PostgreSQL\Data\12.x\".
> Both the C: and H: drives are partitions on the NVMe PCIe Gen. 3 x4 SSD drive 
> mentioned earlier.
> 
> For additional information, I've also installed PostGIS v3.0 r15475 (x64) 
> for PostgreSQL v12.
> 
>>> By two or three times I mean: when this slow connection problem happens, it 
>>> occurs for a few days in a row. Then it returns to normal, with a fast 
>>> login/connection process. 
>>> 
>> 
>>> Regards, 
>>> Surya 
>>> 
>>> 
Is there by any chance a correlation between your physical location and the 
slowness episodes?
Any AV software in play?






Re: Slow or Cannot Connect to PostgreSQL Instance Service on Windows 10

2020-07-22 Thread Surya Widyanto

On 23-07-2020 06:12, Rob Sargent wrote:



On Jul 22, 2020, at 4:58 PM, Surya Widyanto wrote:


On 23-07-2020 05:26, Adrian Klaver wrote:

On 7/22/20 10:24 AM, Surya Widyanto wrote:

Hi,

I have PostgreSQL v12.3 (x64) installed on my Laptop running 
Windows 10 Home Single Language (64-bit).
Two or three times since I fresh-installed PostgreSQL and 
Windows on 17-05-2020, 06:04:56 (GMT), this problem has occurred.


Meant to ask earlier, where did you install Postgres from?

I've downloaded the installer executable from 
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads, 
the Windows x86-64 version.


Then I installed it on my Windows machine with the binaries path *"C:\Program 
Files\PostgreSQL\12\"* and the data path *"H:\PostgreSQL\Data\12.x\"*.
Both the *C:* and *H:* drives are partitions on the NVMe PCIe Gen. 3 x4 SSD 
drive mentioned earlier.


For additional information, I've also installed *PostGIS v3.0 
r15475 (x64)* for *PostgreSQL v12*.


By two or three times I mean: when this slow connection problem 
happens, it occurs for a few days in a row. Then it returns 
to normal, with a fast login/connection process.





Regards,
Surya


Is there by any chance a correlation between your physical location 
and the slowness episodes?

Any AV software in play?



Since the COVID-19 outbreak, I'm working from home. I've tried using 
different internet connections to test: one a wired connection from the 
internet provider at my house, the other a 4G cellular connection from 
my smartphone. Both internet connections give the same effect of a slow, 
easy-to-drop connection to PostgreSQL.
For additional information, I've also had this problem when connecting 
from a PHP application, so in total I've tried connecting to PostgreSQL 
from 4 apps (psql, pgAdmin, Navicat, and a PHP application).
I haven't installed any antivirus software; the only antivirus software 
installed is Windows Defender, which came with Microsoft Windows. 
(Currently I'm disabling the real-time scan in Windows Defender.)


After I activated DEBUG5 for *log_min_messages* and 
*log_min_error_statement*, another log line came up alongside the log 
line I mentioned earlier:
2020-07-22 23:31:29.065 +00 [postgres@postgres] - 127.0.0.1(10191) - 
[1724] (psql) LOG:  could not receive data from client: An existing 
connection was forcibly closed by the remote host.
2020-07-22 23:31:29.065 +00 [postgres@postgres] - 127.0.0.1(10191) - 
[1724] (psql) DEBUG:  unexpected EOF on client connection





psql \r changed behavior in pg10

2020-07-22 Thread Emanuel Araújo
Hi,

When I used PostgreSQL 9.6.*, it was common in psql to use \r\e to clear the
query buffer and open an empty text editor.  That worked fine for me and was
very useful in my tasks.

Since PostgreSQL 10, this behavior has changed and \r no longer clears the
query buffer.

psql's help shows that \r resets (clears) the query buffer, but in practice
it doesn't work, or I do not understand how it is supposed to work.

Example using PostgreSQL 11.8:

select 1;
\e
-> opens a temp file with the last statement "select 1;"
Exit the text editor and the command runs.  That's OK.
\r
\e
-> opens a temp file with the same last command "select 1;"
Is that right?

-- 

*Regards,*
*Emanuel Araújo*


Re: psql \r changed behavior in pg10

2020-07-22 Thread David G. Johnston
On Wednesday, July 22, 2020, Emanuel Araújo  wrote:

>
> \r
> \e
> -> Open temp file with the same last command "select 1;"
> is it right?
>
>
Documentation since v10:

Or, if the current query buffer is empty, the most recently executed query
is copied to a temporary file and edited in the same fashion.

David J.


Re: psql \r changed behavior in pg10

2020-07-22 Thread David G. Johnston
On Wednesday, July 22, 2020, David G. Johnston 
wrote:

> On Wednesday, July 22, 2020, Emanuel Araújo  wrote:
>
>>
>> \r
>> \e
>> -> Open temp file with the same last command "select 1;"
>> is it right?
>>
>>
> Documentation since v10:
>
> Or, if the current query buffer is empty, the most recently executed query
> is copied to a temporary file and edited in the same fashion.
>

I believe \e is now working as intended, but assuming it worked differently
in 9.6, the behavior change did not get noticed and so no release note entry
was added for it.  Adding the new \if meta-commands in v10 resulted in a
reworking of the code, probably causing this to change.  We fixed the docs
to match the expected behavior, which was seen in v10 when the doc patch was
written.

https://github.com/postgres/postgres/commit/e984ef5861df4bc9733b36271d05763e82de7c04

David J.


Re: psql \r changed behavior in pg10

2020-07-22 Thread Tom Lane
"David G. Johnston"  writes:
> On Wednesday, July 22, 2020, Emanuel Araújo  wrote:
>> \r
>> \e
>> -> Open temp file with the same last command "select 1;"
>> is it right?

> Documentation since v10:
> Or, if the current query buffer is empty, the most recently executed query
> is copied to a temporary file and edited in the same fashion.

There's some discussion around that in this thread:

https://www.postgresql.org/message-id/flat/9b4ea968-753f-4b5f-b46c-d7d3bf7c8f90%40manitou-mail.org

The key point is that \r in this case used to clear the "previous query"
buffer since there was nothing for it to remove from the "current query"
buffer.  I argued then that that was confusing and counterproductive,
and I still think that.

regards, tom lane
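

For illustration, a sketch of the v10+ behavior described in this thread 
(prompts and comments are illustrative):

```
postgres=# select 1;     -- executed; saved as the "previous query"
postgres=# select 2      -- typed but not sent; sits in the current buffer
postgres-# \r            -- clears only the current buffer ("select 2")
Query buffer reset (cleared).
postgres=# \e            -- current buffer is empty, so psql loads the most
                         -- recently *executed* query ("select 1;") instead
```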