Re: postgres 11 issue?

2019-07-05 Thread Steve Rogerson
On 03/07/2019 20:36, Tom Lane wrote:
>
> It looks like what's happening is that the result of my_from_local()
> is being stored into the table as an anonymous record value rather
> than a value of type my_timestamp.  (The originating session can figure
> out what the anonymous record type is, but no other session can.)
> That should be fixed, but in the meantime you could dodge the problem by
> declaring my_from_local()'s v_result variable as my_timestamp not record.
>
Thanks for the quick response. The "workaround" works, though it would seem to be
best practice in any case.
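
For anyone finding this later, a minimal sketch of the suggested declaration (the
composite's fields and the function's arguments are invented here; only the
v_result declaration is the point):

create type my_timestamp as (ts timestamptz, tz text);  -- fields are a guess

create or replace function my_from_local(p_local timestamp, p_tz text)
returns my_timestamp
language plpgsql as $$
declare
    v_result my_timestamp;   -- declared as my_timestamp, not "record"
begin
    v_result.ts := p_local at time zone p_tz;
    v_result.tz := p_tz;
    return v_result;
end;
$$;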

Steve




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Well, I think it will not be as easy as you said. Those tables have dependencies.
So, if I try to alter the type it gives me ERROR: cannot alter type of a column
used in a trigger definition. I dropped all triggers on that table and then it
gives me ERROR: cannot alter type of a column used by a view or rule.
Then, if I need to drop everything to change that, I think it is better to just
create an empty structure and copy the entire database's data into it.
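
A self-contained sketch of the drop/alter/recreate dance this implies (all
object names are hypothetical, not our real schema):

create domain i32 as integer;
create table mytable (id i32 not null primary key, description text);
create view my_view as select id, description from mytable;

begin;
drop view my_view;                                  -- dependent views/rules must go first
alter table mytable alter column id type integer;   -- replace the domain with its base type
alter table mytable alter id add generated always as identity;
create view my_view as select id, description from mytable;  -- recreate from the saved definition
commit;
-- triggers that reference the column would likewise need to be dropped and recreated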

And why did we use domains? Because we came from Firebird, and in that database
an integer and a domain based on an integer are the same, so we converted to
Postgres that way.

Thanks







Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-05 Thread Peter J. Holzer
On 2019-07-05 10:59:31 +0200, Thomas Kellerer wrote:
> Gianni Ceccarelli schrieb am 05.07.2019 um 10:00:
> >> strict functions with sql null inputs yield sql null output without
> >> even executing the function
> > 
> > So when the SQL-level executor sees a call to any function declared
> > strict with some NULL parameters, it doesn't call the function at
> > all. `whatever_my_function('a string',1234,NULL)` is always `NULL`
> > 
> 
> Ah, I see. Thanks for the clarification
> 
> Then I would question if declaring jsonb_set as "strict" makes sense

I think it does, but I raise the same question for to_jsonb. It's defined
on anyelement and the documentation says:

| Returns the value as json or jsonb. Arrays and composites are converted
| (recursively) to arrays and objects; otherwise, if there is a cast from
| the type to json, the cast function will be used to perform the
| conversion; otherwise, a scalar value is produced. For any scalar type
| other than a number, a Boolean, or a null value, the text representation
| will be used, in such a fashion that it is a valid json or jsonb value.

The documentation explicitly singles out "a number, a Boolean, or a
null value", but doesn't specify how they are treated. I would expect
that they are treated equivalently, though: An SQL number is converted
to a JSON number, an SQL boolean is converted to JSON true or false and
an SQL null is converted to JSON null. Returning SQL null instead of a
JSON null breaks that expectation for no discernible reason. It also
isn't consistent, since an SQL null in an array or composite is
converted to a JSON null, as I would expect.
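
Concretely, the behaviour in question (a quick sketch to run in psql):

SELECT to_jsonb(1);               -- 1
SELECT to_jsonb(true);            -- true
SELECT to_jsonb(NULL::int);       -- SQL NULL, not the jsonb value 'null'
SELECT to_jsonb(ARRAY[1, NULL]);  -- [1, null]  (the inner null becomes a JSON null)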

hp

-- 
   _  | Peter J. Holzer    | we build much bigger, better disasters now
|_|_) |                    | because we have much more sophisticated
| |   | h...@hjp.at        | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson




Re: Error: rows returned by function are not all of the same row type

2019-07-05 Thread Joe Conway
On 7/5/19 5:39 AM, Andrey Sychev wrote:
> Thank you very much for answering my question, Tom.
> Yes, I have always assumed that returning from a function without
> calling SPI_freetuptable is not a good idea, but I do not know another
> way to achieve the same result.


Please do not top post on the Postgres lists.


> 1. As I understand it, there are tests for SFRM_Materialize in the code above.
> In my case the caller of my function is a PL/pgSQL procedure.
> Does it accept this returning mode?


See contrib/tablefunc/tablefunc.c crosstab_hash() as an example.

> 2. Our current production server versions are 9.1 and 9.6.
> Do these versions support returning a tuplestore?

Works since Postgres 7.3 if I recall correctly.

> 3. Currently my function is defined as "RETURNS SETOF".
> Does the definition of the function need to be changed if I rewrite the code
> to return a tuplestore?

No

HTH,

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: Active connections are terminated because of small wal_sender_timeout

2019-07-05 Thread AYahorau
Hello Everyone!
Do you have any thoughts regarding this issue? 

Best regards, 
Andrei Yahorau



From:   Andrei Yahorau/IBA
To: pgsql-gene...@postgresql.org, 
Date:   21/06/2019 11:14
Subject:Active connections are terminated because of small 
wal_sender_timeout


Hello PostgreSQL Community!

Not long ago I opened an issue regarding the
"terminating walsender process due to replication timeout" problem:
https://www.postgresql.org/message-id/OF85C33E30.171C1C23-ON432583F9.003F5B16-432583F9.003FBAD7%40iba.by
 


Thanks everyone for your comprehensive explanations.
Basically I faced this problem because of a small wal_sender_timeout value
(1 second) and a quite large volume of data to be replicated.
There were some assumptions about problems with my network, but during my
investigation I confirmed that it was not a network problem.

I took a look at the PostgreSQL source code. As far as I understood, the
walsender sends the walreceiver both data about changes and so-called
keepalive messages.
The walsender sends these keepalive messages periodically, once per
wal_sender_timeout (once per second in my case), and expects responses from
the walreceiver confirming that everything is OK.

I switched on tracing and repeated my test. I found out that the walreceiver
starts processing the "keepalive" message 3 seconds after it was sent.
As far as I understand, this happened because the walreceiver was busy
accepting all the changes from the master and writing them to its standby
WAL. So the standby Postgres log was filled with entries about writing
its own WAL:
18642 2019-05-28 05:08:31 EDT 0 DEBUG: record known xact 559 latestObservedXid 559
18642 2019-05-28 05:08:31 EDT 0 CONTEXT: WAL redo at 0/112CE9C0 for Heap/LOCK: off 53: xid 559: flags 0 LOCK_ONLY EXCL_LOCK
18642 2019-05-28 05:08:31 EDT 0 DEBUG: record known xact 559 latestObservedXid 559
18642 2019-05-28 05:08:31 EDT 0 CONTEXT: WAL redo at 0/112CE9F8 for Heap/UPDATE: off 53 xmax 559 ; new off 129 xmax 0
18642 2019-05-28 05:08:31 EDT 0 DEBUG: record known xact 559 latestObservedXid 559
18642 2019-05-28 05:08:31 EDT 0 CONTEXT: WAL redo at 0/112CEA48 for Heap/LOCK: off 54: xid 559: flags 0 LOCK_ONLY EXCL_LOCK
18642 2019-05-28 05:08:31 EDT 0 DEBUG: record known xact 559 latestObservedXid 559
18642 2019-05-28 05:08:31 EDT 0 CONTEXT: WAL redo at 0/112CEA80 for Heap/UPDATE: off 54 xmax 559 ; new off 130 xmax 0
18642 2019-05-28 05:08:31 EDT 0 DEBUG: record known xact 559 latestObservedXid 559
Because it was writing a large volume of data, it was not able to handle the
next messages quickly. This does not seem to be related to network bandwidth
or CPU saturation.

Thereby, I see a contradiction with the official description of the
wal_sender_timeout parameter:
Terminate replication connections that are inactive longer than the 
specified number of milliseconds.
This is useful for the sending server to detect a standby crash or network 
outage.

During my test the connection between master and standby was active and
there was no network outage. So according to the description there was
no need to terminate the replication connection.

So, I have some questions:
Is there any way (e.g. via configuration or otherwise) to make the response
time to keepalive messages independent of the amount of data that the
walreceiver has to process?
If there is no such way, is it possible to update the wal_sender_timeout
documentation so that it reflects reality?


Best regards, 
Andrei Yahorau


Unavailability of Jar for connectivity in Postgres

2019-07-05 Thread Abraham, Nikhil (COR), Vodafone Idea
Dear Team,
Good Wishes.

We are planning to migrate one of our critical applications from Oracle to
PostgreSQL 11.1. The UI of the application calls lots of PL/SQL procedures. To
have that calling facility, a jar is provided by the respective database
vendors; currently the ODBC-JDBC connectivity drivers are provided by Oracle in
the form of a jar. However, connectivity from the UI to call procedures in
PostgreSQL is currently not possible in the absence of such a driver, the
reason being that no jar is provided for such connectivity for PostgreSQL 11.1.

This challenge has become a show stopper for the migration; please let us know
whether any alternatives are available.
Regards
Nikhil





Re: Unavailability of Jar for connectivity in Postgres

2019-07-05 Thread Adrian Klaver

On 7/5/19 5:35 AM, Abraham, Nikhil (COR), Vodafone Idea wrote:

Dear Team,

Good Wishes.

We are planning to migrate one of our critical applications from Oracle
to PostgreSQL 11.1. The UI of the application calls lots of PL/SQL
procedures. To have that calling facility, a jar is provided by the
respective database vendors; currently the ODBC-JDBC connectivity drivers
are provided by Oracle in the form of a jar. However, connectivity from
the UI to call procedures in PostgreSQL is currently not possible in the
absence of such a driver, the reason being that no jar is provided for
such connectivity for PostgreSQL 11.1.


There is a JDBC driver for Postgres:

https://jdbc.postgresql.org/documentation/head/index.html

and it can call stored functions:

https://jdbc.postgresql.org/documentation/head/callproc.html
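
As a rough sketch (hypothetical object names, PostgreSQL 11 syntax): server-side
procedures are invoked with CALL and functions with SELECT, and the driver above
can send both as ordinary statements:

create table visits (username text, visited_at timestamptz);

create procedure log_visit(p_user text)
language sql
as $$ insert into visits (username, visited_at) values (p_user, now()); $$;

create function visit_count(p_user text) returns bigint
language sql
as $$ select count(*) from visits where username = p_user; $$;

call log_visit('nikhil');        -- procedures: CALL
select visit_count('nikhil');    -- functions: SELECT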

If this is not what you are looking for then can you be more specific 
about your needs?





This challenge has become a show stopper for the migration; please let us
know whether any alternatives are available.


Regards

Nikhil






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Vacuum and freeing dead rows

2019-07-05 Thread Ron

On 7/5/19 3:16 AM, Simon T wrote:

Hi,

I have a very heavily updated table in a Postgres 9.6.10 database with
lots of disk bloat. Every row is updated about once a minute, and
little to no inserts. Approx 18k rows total. The table has bloated
from ~1700 KB to about 6 GB over a few weeks time. I'm trying to
understand why vacuum hasn't made dead rows available for re-use.


[snip]

And in case it is relevant:

appdb=# SELECT pid, datname, usename, state, backend_xmin
appdb-# FROM pg_stat_activity
appdb-# WHERE backend_xmin IS NOT NULL
appdb-# ORDER BY age(backend_xmin) DESC;
  pid  | datname | usename  |        state        | backend_xmin
-------+---------+----------+---------------------+--------------
 10921 | appdb   | app      | idle in transaction | 3501305052


"idle IN TRANSACTION" is never good.  Transactions should always be as short 
as possible.



 10919 | appdb   | app      | idle in transaction | 3501305052
 10916 | appdb   | app      | idle in transaction | 3501305052
 27935 | appdb   | app      | idle in transaction | 3501305052
 24500 | appdb   | postgres | active              | 3501305052
 10914 | appdb   | app      | active              | 3501305052
 20671 | appdb   | postgres | active              | 3501305052
 11817 | appdb   | app      | active              | 3501305052
  1988 | appdb   | app      | active              | 3501305052
 15041 | appdb   | postgres | active              | 3501305052
  9916 | appdb   | postgres | active              | 3501305052
 10912 | appdb   | app      | idle in transaction | 3501305052
 10909 | appdb   | app      | idle in transaction | 3501305052
(13 rows)


Add backend_start to that query.  I'd kill any idle transactions that are more
than 30 minutes old.  (Of course, since they're "idle IN TRANSACTION", you'd
lose whatever work they had in progress.)
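
For example, one way to do that (a sketch; pg_terminate_backend() kills the
whole session, so be sure that is what you want):

SELECT pid, usename, backend_start, xact_start,
       pg_terminate_backend(pid) AS terminated
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '30 minutes';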


--
Angular momentum makes the world go 'round.


Re: Active connections are terminated because of small wal_sender_timeout

2019-07-05 Thread Tom Lane
ayaho...@ibagroup.eu writes:
> Do you have any thoughts regarding this issue? 

I do not think anybody thinks this is a bug.  Setting wal_sender_timeout
too small is a configuration mistake.
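
(For reference, a minimal sketch of raising it; wal_sender_timeout only needs a
reload, not a restart:)

ALTER SYSTEM SET wal_sender_timeout = '60s';
SELECT pg_reload_conf();
SHOW wal_sender_timeout;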

regards, tom lane




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Adrian Klaver

On 7/5/19 4:38 AM, PegoraroF10 wrote:

Well, I think it will not be as easy as you said. Those tables have dependencies.
So, if I try to alter the type it gives me ERROR: cannot alter type of a column
used in a trigger definition. I dropped all triggers on that table and then it
gives me ERROR: cannot alter type of a column used by a view or rule.
Then, if I need to drop everything to change that, I think it is better to just
create an empty structure and copy the entire database's data into it.

And why did we use domains? Because we came from Firebird, and in that database
an integer and a domain based on an integer are the same, so we converted to
Postgres that way.


That is not what the Firebird docs say:

https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-datatypes-custom.html

https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-domn.html

https://firebirdsql.org/file/documentation/release_notes/html/en/3_0/rnfb30-ddl-enhance.html



Thanks









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: PostgreSQL upgrade from 9.4.2 to 9.6.12

2019-07-05 Thread Michael Lewis
This may also be of interest to you, but checking official docs as Adrian
recommended is best.
https://why-upgrade.depesz.com/show?from=9.4.2&to=9.6.12&keywords=


xpath differences between postgres 11.4 and 10.3

2019-07-05 Thread Felipe de Jesús Molina Bravo
Hi!!


I see strange behavior between 11.4 and 10.3 with the xpath() function:

With pgsql 10.3 I have:
  select  xpath('@idc', '' ) ;
   xpath
  --------
   {4776}
  (1 fila)

and with pgsql 11.4:
  select  xpath('@idc', ''::xml ) ;
   xpath
  -------
   {}
  (1 fila)

Now, if I change the expression to use "//":
select  xpath('//@idc', ''::xml ) ;
 xpath
--------
 {4776}
(1 fila)

The release notes for PostgreSQL 11 say:

    Correctly handle relative path expressions in xmltable(), xpath(), and
    other XML-handling functions (Markus Winand)

    Per the SQL standard, relative paths start from the document node of
    the XML input document, not the root node as these functions previously
    did.

Despite this note I do not understand why it behaves differently... or
maybe I do not understand the note well. What do you think?
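
For what it's worth, here is a self-contained sketch with a stand-in document
(the original XML literal did not survive the archive, so <row idc="4776"/>
below is only an assumed equivalent):

-- <row idc="4776"/> is a hypothetical stand-in for the original document
select xpath('@idc',    '<row idc="4776"/>'::xml);   -- 10.x: {4776}   11.x: {}
select xpath('/*/@idc', '<row idc="4776"/>'::xml);   -- 11.x: {4776} (absolute path from the document node)
select xpath('//@idc',  '<row idc="4776"/>'::xml);   -- 11.x: {4776}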

thanks in advance!

regards!


Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Adrian, on Firebird if you create a table or procedure with fields or params
declared with domains, they can be used with those domains or with their
base type. On Postgres I32 is not equal to integer.

create procedure myproc(id i32) returns(x i32) as ... 

select * from myproc(cast(5 as integer)) -- works on Firebird. On Postgres
it doesn't, because that procedure or function expects an I32 and not an
integer.







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Tom Lane
PegoraroF10  writes:
> Adrian, on Firebird if you create a table or procedure with fields or params
> declared with domains, they can be used with those domains or with their
> base type. On Postgres I32 is not equal to integer.

> create procedure myproc(id i32) returns(x i32) as ... 

> select * from myproc(cast(5 as integer)) -- works on Firebird. On Postgres
> it doesn't, because that procedure or function expects an I32 and not an
> integer.

Really?

regression=# create domain i32 as int;
CREATE DOMAIN
regression=# create function myproc(id i32) returns i32 
language sql as 'select $1';
CREATE FUNCTION
regression=# select myproc(cast(5 as integer));
 myproc 
--------
      5
(1 row)

I think the primary reason we don't allow domains over int for identity
columns is that it's not clear how domain constraints ought to interact
with the identity-value-generating behavior.

regards, tom lane




Re: Partitioning an existing table - pg10.6

2019-07-05 Thread legrand legrand
Hello,

Trying to do what I suggested, I realized it doesn't work ;o(

Logical replication can only work between two distinct databases,
and it seems that the replicated table name is the same as its source ...

Regards
PAscal







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Adrian Klaver

On 7/5/19 1:01 PM, PegoraroF10 wrote:

Adrian, on Firebird if you create a table or procedure with fields or params
declared with domains, they can be used with those domains or with their
base type. On Postgres I32 is not equal to integer.

create procedure myproc(id i32) returns(x i32) as ...

select * from myproc(cast(5 as integer)) -- works on Firebird. On Postgres
it doesn't, because that procedure or function expects an I32 and not an
integer.


Tom beat me to the example. Still the point is that I think you are 
going to find that there is not that much difference between Firebird 
and Postgres on this point.


To get back to the original case, is there a specific reason you want an 
IDENTITY column on the id field?












--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
Sorry, the example I was thinking of was this one, which works on Firebird,
using its way of writing, obviously.

create function myproc(id integer) returns I32 language sql as 'select $1';

On Postgres: ERROR: return type mismatch in function declared to return i32

What I mean is that Firebird sees I32 and integer as the same; Postgres
doesn't.







Re: Partitioning an existing table - pg10.6

2019-07-05 Thread Michael Lewis
I have not personally used this, but the write-up seems solid for minimizing
downtime while shifting data gradually. Be sure you understand the
limitations of partitioning, particularly while you are still on 10.x and not
yet on v11 (where updates will move a row to a new partition if the partition
key is updated).

https://www.depesz.com/2019/03/19/migrating-simple-table-to-partitioned-how/


Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Adrian Klaver

On 7/5/19 1:49 PM, PegoraroF10 wrote:

Sorry, the example I was thinking of was this one, which works on Firebird,
using its way of writing, obviously.

create function myproc(id integer) returns I32 language sql as 'select $1';

On Postgres: ERROR: return type mismatch in function declared to return i32

What I mean is that Firebird sees I32 and integer as the same; Postgres
doesn't.


Yeah, but if you reverse the casting you did in your first example it works:

create function myproc(id integer) returns I32 language sql as 'select 
$1::i32';

CREATE FUNCTION

test_(aklaver)> select myproc(5);
 myproc 
--------
      5
(1 row)

test_(aklaver)> select pg_typeof(myproc(5));
 pg_typeof
-----------
 i32
(1 row)










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread PegoraroF10
- Because we don't need to give rights to users on sequences;
- Nobody will change the values of PK fields, because we would like to have
GENERATED ALWAYS on those PK fields.







Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Adrian Klaver

On 7/5/19 1:55 PM, PegoraroF10 wrote:

- Because we don't need to give rights to users on sequences;
- Nobody will change the values of PK fields, because we would like to have
GENERATED ALWAYS on those PK fields.


An IDENTITY column is still backed by a sequence:

create table identity_test(id integer PRIMARY KEY GENERATED BY DEFAULT 
AS IDENTITY);


\ds identity_test_id_seq
                 List of relations
 Schema |         Name         |   Type   |  Owner  
--------+----------------------+----------+---------
 public | identity_test_id_seq | sequence | aklaver

You end up with the same thing as using a sequence (with some additional 
syntax over its behavior):


create table seq_id_test(id integer PRIMARY KEY);

create sequence seq_id_test_seq AS integer OWNED BY seq_id_test.id;

\ds seq_id_test_seq
              List of relations
 Schema |      Name       |   Type   |  Owner  
--------+-----------------+----------+---------
 public | seq_id_test_seq | sequence | aklaver


Rights are the same:

\c - production
You are now connected to database "test" as user "production".

test_(production)> insert into identity_test (id) values(default);
ERROR:  permission denied for table identity_test
test_(production)> insert into seq_id_test (id) values(default);
ERROR:  permission denied for table seq_id_test

A user can change the PK by using OVERRIDING SYSTEM VALUE in an INSERT.
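
For instance (a small sketch with a hypothetical table):

create table always_test (id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY);

insert into always_test (id) values (42);                          -- ERROR
insert into always_test (id) OVERRIDING SYSTEM VALUE values (42);  -- accepted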











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Tom Lane
Adrian Klaver  writes:
> On 7/5/19 1:49 PM, PegoraroF10 wrote:
>> Sorry, the example I was thinking of was this one, which works on Firebird,
>> using its way of writing, obviously.
>> create function myproc(id integer) returns I32 language sql as 'select $1';
>> 
>> On Postgres: ERROR: return type mismatch in function declared to return i32
>> What I mean is that Firebird sees I32 and integer as the same; Postgres
>> doesn't.

> Yeah, but if you reverse the casting you did in your first example it works:
> create function myproc(id integer) returns I32 language sql as 'select 
> $1::i32';
> CREATE FUNCTION

Yeah.  This isn't an inherent property of Postgres, it's just that
SQL-language functions aren't defined to provide any implicit casting
of their results.  The given expression must yield exactly the declared
function result type.

Most other places in PG are laxer and will automatically perform
implicit (and maybe assignment) casts for you.  I don't remember
offhand whether there are good reasons for SQL functions to be
picky about this or it's just a shortage of round tuits.  I have
a vague feeling that there might be some compatibility issues
in there, though.

regards, tom lane




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Adrian Klaver

On 7/5/19 3:32 PM, Tom Lane wrote:

Adrian Klaver  writes:

On 7/5/19 1:49 PM, PegoraroF10 wrote:

Sorry, the example I was thinking of was this one, which works on Firebird,
using its way of writing, obviously.
create function myproc(id integer) returns I32 language sql as 'select $1';

On Postgres: ERROR: return type mismatch in function declared to return i32
What I mean is that Firebird sees I32 and integer as the same; Postgres
doesn't.



Yeah, but if you reverse the casting you did in your first example it works:
create function myproc(id integer) returns I32 language sql as 'select
$1::i32';
CREATE FUNCTION


Yeah.  This isn't an inherent property of Postgres, it's just that
SQL-language functions aren't defined to provide any implicit casting
of their results.  The given expression must yield exactly the declared
function result type.


Aah:

CREATE OR REPLACE FUNCTION public.domain_test(id integer)
 RETURNS i32
 LANGUAGE plpgsql
AS $function$
BEGIN
RETURN id;
END;
$function$
;

test=> select domain_test(5);
 domain_test 
-------------
           5
(1 row)

test=> select pg_typeof(domain_test(5));
 pg_typeof
-----------
 i32
(1 row)

So it works in plpgsql.




Most other places in PG are laxer and will automatically perform
implicit (and maybe assignment) casts for you.  I don't remember
offhand whether there are good reasons for SQL functions to be
picky about this or it's just a shortage of round tuits.  I have
a vague feeling that there might be some compatibility issues
in there, though.

regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Converting to identity columns with domains on PK columns

2019-07-05 Thread Laurenz Albe
PegoraroF10 wrote:
> Domains on Postgres are really strange to me. I am creating a domain which is
> exactly equal to integer, right?
> 
> create domain i32 as integer;
> create domain T50 as varchar(50);
> 
> Create table MyTable(
> ID I32 not null primary key,
> Description T50);
> 
> Then, after inserts and updates done to that table, I want to convert that
> primary key to a identity column.
> 
> alter table MyTable alter ID add generated always as identity;
> 
> ERROR: identity column type must be smallint, integer, or bigint
> 
> So, what do I need to do to create this identity column?
> Why does Postgres consider I32 and integer to be different?

A domain is more than just a different name for a data type, so
the system doesn't treat them as identical.

Another way to proceed would be:

ALTER TABLE mytable ALTER id TYPE integer;
ALTER TABLE mytable ALTER id ADD GENERATED ALWAYS AS IDENTITY;

That would not rewrite the table, just "relabel" the type name
to "integer" and then convert it to an identity column.


Why do you want that extra level of obfuscation rather than
calling an integer an integer?

Yours,
Laurenz Albe

-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-05 Thread Gianni Ceccarelli
Aha! I had misunderstood how "strict"-ness works.

Thank you David for the explanation!

Thomas: the two main pieces are these:

> SQL null and json null are represented differently

As far as SQL is concerned, `'null'::jsonb` is a valid (non-`NULL`)
value. The SQL part of Postgres doesn't "look inside" the jsonb value,
the same way it doesn't "look inside" numbers or strings or
whatever. It only cares if they're `NULL` or not, and then it passes
them to functions and operators (ok, it does look at boolean values
for `WHERE` clauses, but even `ORDER BY` is handled by comparison
operators)

> strict functions with sql null inputs yield sql null output without
> even executing the function

So when the SQL-level executor sees a call to any function declared
strict with some NULL parameters, it doesn't call the function at
all. `whatever_my_function('a string',1234,NULL)` is always `NULL`
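
With jsonb_set() specifically, that means (a quick sketch):

SELECT jsonb_set('{"a": 1}', '{b}', NULL);           -- SQL NULL: the strict function is never called
SELECT jsonb_set('{"a": 1}', '{b}', 'null'::jsonb);  -- {"a": 1, "b": null}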

-- 
Dakkar - 
GPG public key fingerprint = A071 E618 DD2C 5901 9574
 6FE2 40EA 9883 7519 3F88
key id = 0x75193F88




Vacuum and freeing dead rows

2019-07-05 Thread Simon T
Hi,

I have a very heavily updated table in a Postgres 9.6.10 database with
lots of disk bloat. Every row is updated about once a minute, and
little to no inserts. Approx 18k rows total. The table has bloated
from ~1700 KB to about 6 GB over a few weeks time. I'm trying to
understand why vacuum hasn't made dead rows available for re-use.


appdb=# \d+ app.heartbeat;
 Table "app.heartbeat"
   Column|Type |   Modifiers   |
Storage  | Stats target | Description
-+-+---+--+--+-
 endpoint| uuid| not null  |
plain|  |
 record_timestamp| timestamp without time zone | not null  |
plain|  |
 heartbeat_timestamp | timestamp without time zone | not null  |
plain|  |
 outer_ip| character varying(46)   | not null  |
extended |  |
 inner_ip| character varying(46)   | not null  |
extended |  |
 last_notified   | timestamp without time zone | default now() |
plain|  |
Indexes:
"heartbeat_pkey" PRIMARY KEY, btree (endpoint)
"heartbeat_endpoint_idx" btree (endpoint)
Options: fillfactor=45


I reset the stats on the table and a few moments later ran:

SELECT pg_stat_reset_single_table_counters(40237);

SELECT c.oid, c.relname, c.relfrozenxid, c.relminmxid, c.relreplident,
relpages, reltuples, pg_stat_get_live_tuples(c.oid) AS n_live_tup
 , pg_stat_get_dead_tuples(c.oid) AS n_dead_tup, n_tup_ins,
n_tup_upd, n_tup_del, n_tup_hot_upd, n_mod_since_analyze,
last_analyze, last_autoanalyze, autovacuum_count, autoanalyze_count
FROM pg_class c, pg_stat_all_tables av where c.oid = av.relid and
c.relname = av.relname and c.relname = 'heartbeat' order by
av.relname;

-[ RECORD 1 ]-------+------------
oid                 | 40237
relname             | heartbeat
relfrozenxid        | 3451305052
relminmxid          | 55105127
relreplident        | d
relpages            | 674616
reltuples           | 15494
n_live_tup          | 0
n_dead_tup          | 6234
n_tup_ins           | 0
n_tup_upd           | 6234
n_tup_del           | 0
n_tup_hot_upd       | 3758
n_mod_since_analyze | 6234
last_analyze        | 
last_autoanalyze    | 
autovacuum_count    | 0
autoanalyze_count   | 0


I also found the difference between n_tup_upd and n_tup_hot_upd interesting:
HOT updates were 3758, vs 6234 updates overall. I was expecting all updates
to be HOT since the PK isn't changing, just record values (timestamps, IPs),
and my fillfactor is low. Maybe not relevant, but I thought it worth
mentioning just in case, because both the index and the table are bloated.


So when I try to vacuum it, the vacuum reports:

DETAIL: 55330122 dead row versions cannot be removed yet.
INFO: "heartbeat": found 0 removable, 55347745 nonremovable row
versions in 675230 out of 675230 pages

How can I determine what is preventing the dead rows from being
removed? Is there a way to identify which transaction, if any, is
preventing their re-use? Or am I misinterpreting the results?
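
(For the record, besides pg_stat_activity.backend_xmin, the other usual holders
of an old xmin can be checked like this; a sketch that works on 9.6:)

SELECT gid, prepared, owner FROM pg_prepared_xacts;               -- forgotten prepared transactions
SELECT slot_name, xmin, catalog_xmin FROM pg_replication_slots;   -- replication slots
SELECT pid, backend_xmin FROM pg_stat_replication;                -- walsenders (hot_standby_feedback)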

Full vacuum log:

appdb=# vacuum verbose app.heartbeat;
INFO: vacuuming "app.heartbeat"
INFO: index "heartbeat_pkey" now contains 23435126 row versions in 112094 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.38s/0.40u sec elapsed 1.07 sec.
INFO: index "hearbeat_endpoint_idx" now contains 23435358 row versions
in 112185 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.23s/0.35u sec elapsed 0.70 sec.
INFO: "hearbeat": found 0 removable, 55347745 nonremovable row
versions in 675230 out of 675230 pages
DETAIL: 55330122 dead row versions cannot be removed yet.
There were 199391 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 3.40s/11.46u sec elapsed 18.39 sec.
VACUUM

And in case it is relevant:

appdb=# SELECT pid, datname, usename, state, backend_xmin
appdb-# FROM pg_stat_activity
appdb-# WHERE backend_xmin IS NOT NULL
appdb-# ORDER BY age(backend_xmin) DESC;
  pid  | datname | usename  |        state        | backend_xmin
-------+---------+----------+---------------------+--------------
 10921 | appdb   | app      | idle in transaction | 3501305052
 10919 | appdb   | app      | idle in transaction | 3501305052
 10916 | appdb   | app      | idle in transaction | 3501305052
 27935 | appdb   | app      | idle in transaction | 3501305052
 24500 | appdb   | postgres | active              | 3501305052
 10914 | appdb   | app      | active              | 3501305052
 20671 | appdb   | postgres | active              | 3501305052
 11817 | appdb   | app      | active              | 3501305052
  1988 | appdb   | app      | active              | 3501305052
 15041 | appdb   | postgres | active              | 3501305052
  9916 | appdb   | postgres | active              | 

Re: Why does jsonb_set() remove non-mentioned keys?

2019-07-05 Thread Thomas Kellerer
Gianni Ceccarelli schrieb am 05.07.2019 um 10:00:
>> strict functions with sql null inputs yield sql null output without
>> even executing the function
> 
> So when the SQL-level executor sees a call to any function declared
> strict with some NULL parameters, it doesn't call the function at
> all. `whatever_my_function('a string',1234,NULL)` is always `NULL`
> 

Ah, I see. Thanks for the clarification

Then I would question if declaring jsonb_set as "strict" makes sense

Thomas




Re: Error: rows returned by function are not all of the same row type

2019-07-05 Thread Andrey Sychev
Thank you very much for answering my question, Tom.
Yes, I have always assumed that returning from a function without
calling SPI_freetuptable is not a good idea, but I do not know another
way to achieve the same result.

As I am not an expert in PostgreSQL internals and mostly work according
to the official documentation, I did not know about tuplestores before.

This looks like a pretty interesting idea.

I have searched for tuplestore on the Internet and found some
topics relevant to the subject:

1. https://postgres.cz/wiki/Iter%C3%A1tor_pole
2. https://www.postgresql.org/message-id/1073862553.1475.93.camel%40jeff
3.
https://www.postgresql.org/message-id/753432.21663.qm%40web65511.mail.ac4.yahoo.com
(with your comment)
4. 
http://web.mit.edu/ghudson/trac/attic/src/postgresql-7.4.5/contrib/tablefunc/tablefunc.c

I have examined these topics briefly and have some questions:

1. As I understand it, there are tests for SFRM_Materialize in the code above.
In my case the caller of my function is a PL/pgSQL procedure.
Does it accept this returning mode?

2. Our current production server versions are 9.1 and 9.6.
Do these versions support returning a tuplestore?

3. Currently my function is defined as "RETURNS SETOF".
Does the definition of the function need to be changed if I rewrite the code
to return a tuplestore?


> Andrey Sychev  writes:
>> I have written C-language function that returns
>> multiple composite rows.
>> Generally function works as expected, but sometimes problem takes place.
>> At  rough  guess  the  problem  occurs  when  number of returning rows
>> relatively large (more than 100K - 1M).

> I do not think it's valid to return from your function with the SPI
> context still open.  Probably, it seems to accidentally sort of work
> as long as you don't return enough rows to cause the outer query to
> do anything interesting like spill to disk.

> Probably you should re-code this to execute just once and return
> a tuplestore.

> regards, tom lane

-- 
Best regards,

Andrey Sychev

andrey.syc...@cifrasoft.com