Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
>
> First thing I'd look at is whether it's the same execution environment
> in both cases, eg same search_path.
>

As far as I can tell, it's the same execution environment, same search_path
and same user.

After testing other situations, I found that the psql command would always
finish as expected once the first query, which ran too long, had been
canceled.  I was able to reproduce this scenario with various combinations
of psql and pgadmin4.

Any suggestions on what that would indicate?  The canceled query does not
appear to have completed, as there are no rows and no duplicate-key errors
when it is run a second time.

corey


pg_receivexlog or archive_command

2019-09-23 Thread Vikas Sharma
Hi,

I am wondering which one is the best way to archive the xlogs for Backup
and Recovery - pg_receivexlog or archive_command.

pg_receivexlog seems best suited because the archived file is streamed as
the WAL is being written, while archive_command only copies a WAL segment
once it has been completely written.

Best wishes
Vikas


How to get timezone offset in timestamp with time zone AT TIME ZONE output.

2019-09-23 Thread Paul McGarry
Hi there,

Does anyone have a good way of doing:

=
select '2020-04-04 15:00:00+00'::timestamp with time zone AT TIME ZONE
'Australia/Sydney';
      timezone
---------------------
 2020-04-05 02:00:00

select '2020-04-04 16:00:00+00'::timestamp with time zone AT TIME ZONE
'Australia/Sydney';
      timezone
---------------------
 2020-04-05 02:00:00
=

but with the output including the offset, eg:
2020-04-05 02:00:00+11
2020-04-05 02:00:00+10
respectively, so it is clear which 2am it is (the times above are around a
DST switch)?


I have seen a couple of suggestions involving setting the desired time zone
in the db session environment, but my actual use case will be a bit more
complex, something like,


CREATE TABLE users (
    user_id bigint,
    user_timezone text -- Eg 'Australia/Sydney','Asia/Hong_Kong'
);
CREATE TABLE data (
    id bigint,
    user_id bigint,
    datetime timestamp with time zone
);
INSERT INTO users VALUES (1,'Australia/Sydney');
INSERT INTO users VALUES (2,'Asia/Hong_Kong');
INSERT INTO data VALUES (5,1,'2020-04-05 02:00:00');
INSERT INTO data VALUES (6,2,'2020-04-05 02:00:00');

and I'll want to run a query like:

select id, datetime,
  datetime AT TIME ZONE (select user_timezone from users where
data.user_id=users.user_id) as usertime from data;


where I want the usertime to be returned in the corresponding users
timezone, but with the offset. Therefore whatever renders the offset needs
to be capable of doing it per row, independently of the server/session time
zone.

And to_char isn't much help:


select to_char('2020-04-04 15:00:00+00'::timestamp with time zone AT TIME
ZONE 'Australia/Sydney','YYYY-MM-DD HH24:MI:SSOF');
        to_char
------------------------
 2020-04-05 02:00:00+00

 because to_char only deals with a timestamp and loses the timezone info
and you end up with something very wrong.

Any ideas?

Thanks for any help.

Paul


Re: How to get timezone offset in timestamp with time zone AT TIME ZONE output.

2019-09-23 Thread Andrew Gierth
> "Paul" == Paul McGarry  writes:

 Paul> Hi there,
 Paul> Does anyone have a good way of doing:

 Paul> =
 Paul> select '2020-04-04 15:00:00+00'::timestamp with time zone AT TIME ZONE
 Paul> 'Australia/Sydney';
 Paul>   timezone
 Paul> -
 Paul>  2020-04-05 02:00:00

 Paul> select '2020-04-04 16:00:00+00'::timestamp with time zone AT TIME ZONE
 Paul> 'Australia/Sydney';
 Paul>   timezone
 Paul> -
 Paul>  2020-04-05 02:00:00
 Paul> =

 Paul> but with the output including the offset, eg:
 Paul> 2020-04-05 02:00:00+11
 Paul> 2020-04-05 02:00:00+10

This is ugly in some respects but minimizes the possible hazards (since
using a function-local SET clause ensures that the timezone is reset on
exit):

create function ts_to_char(t timestamptz, z text) returns text
  language plpgsql immutable
  set timezone = 'GMT'
  as $$
begin
  perform set_config('timezone', z, true);
  return t::text;
end;
$$;

select ts_to_char(timestamptz '2020-04-04 16:00:00+00', 'Australia/Sydney');
       ts_to_char
------------------------
 2020-04-05 02:00:00+10
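
Applied per row, that would look something like this (a sketch, assuming
the users/data tables from the original post):

select d.id,
       ts_to_char(d.datetime, u.user_timezone) as usertime
from data d
join users u on u.user_id = d.user_id;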

-- 
Andrew (irc:RhodiumToad)




Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 10:25 schrieb Vikas Sharma:

Hi,

I am wondering which one is the best way to archive the xlogs for 
Backup and Recovery - pg_receivexlog or archive_command.


pg_receivexlog seems best suited because the copied/archived file is 
streamed as it is being written to in xlog while archive_command only 
copies when the WAL is fully written to.




you can use both of them, and you should consider "Barman".


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to represent a bi-directional list in db?

2019-09-23 Thread Francisco Olarte
Pankaj:


On Mon, Sep 23, 2019 at 4:07 AM Pankaj Jangid  wrote:
> Thanks. This resolved my problem of NULL/NOT NULL conflict. I wasn't
> aware that SERIAL is by default NOT NULL.

Not only that. Once you strip the annoying NOT NULL the only thing
remaining on a serial column is a "default nextval", which you
normally do not want ( you could think of populating the table in
creative ways, but they are on a different sequence than the one you
use for the ID column ).

> > Also, you may have problems populating this kind of table, as you will
> > not have the ids from either prev or next stage when building it.
> If NULL value is allowed I can fill it up with NULL initially. Right? Or
> is there something wrong here.

There is not. You can use (id,prev,next) = (1,null,null) and then
update, but you are going to need to travel up and down a lot, or
store a lot of data. If you use the trick I mention later of just
using "prev", then on a table having (id=serial, prev=int) you can
build a sequence like this: start with prev_id=null; insert
(id,prev,other_data) returning id; copy the returned value into
prev_id; rinse and repeat.

Also note that you can query the sequence AND advance it and then
insert all rows without default values.
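
A sketch of that insert-returning pattern, assuming a table like
stages(id serial primary key, prev_id int, name text):

insert into stages (prev_id, name) values (null, 'first')  returning id;  -- suppose this returns 1
insert into stages (prev_id, name) values (1, 'second')    returning id;  -- feed the id you just got back in
insert into stages (prev_id, name) values (2, 'third')     returning id;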

> > And lastly, in SQL you do not really need a doubly linked list, just
> > populate prev_stage_id, and index it and you can query next stage of a
> > tuple using it.
> Could you please elaborate? Suppose I have this table,
> CREATE TABLE stages (
> id SERIAL PRIMARY KEY,
> name VARCHAR(80) NOT NULL,
> next_id INTEGER REFERENCES stages NULL
> );
> What would be the backward query in that case? Forward is clear. This is
> forward query,
> SELECT name FROM stages WHERE next_id = 123;

No. That is a BACKWARDS QUERY. You are in row 123, you go BACK to its
preceding one.
If you need a traversable list containing (ListID, id, name) = x,1,A;
x,2,b; x,3,c ( I've added the ListId column to make it more
interesting/realistic, you normally do not have a single table)
In SQL you can build a (ListId, id, prev_id, name ) table ( PREV is
easier, as when you insert a row, in a normal application, you know
the previous one, but not the next one ) with the data
(x,1,null,a),(x,2,1,b),(x,3,2,c) ( the last one is a synthetic
sentinel and assumes nullable id), you can do it in a lot of ways.

To traverse it forward you just query "select id where listid=x and
prev_id is null" to locate the head (1), and then just go forward by
selecting with prev_id = the last id you got, until you hit zero results.
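
For example (a sketch, assuming a stages(listid, id, prev_id, name) table
as above):

select id, name from stages where listid = 'x' and prev_id is null;  -- head of the list
select id, name from stages where listid = 'x' and prev_id = 1;      -- next row; repeat with each id you get back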

To traverse backwards there are several ways. In the real cases I've
used I always had a "list row" where I could store the node for the
1st stage. In those cases I linked them circularly, (id=1, prev=3), so
bidirectional traversing was easy. Or you can use a special sentinel
node ( with a marker, like name=null). The thing is you locate the
last row, and then just query with id = the last prev_id you got. I do not
remember the details, but probably your "stages" are stages of
something which has a row, which can readily have a "first_stage_id"
or something similar.

Lists in tables are not the same as in C, where you directly store
pointers which point outwards. In this case any unique data serves as
a pointer, slow ( table scan ) by default, faster if you index the
column.

Anyway, unless you need the "linked list" functionality for something
( really heavy manipulation of large stage lists, splicing things
around ), I've normally found it's easier, in sql, to model this kind
of thing with a master-detail + order column.
( whatever = (id, , first_stage_id), stages=(id, order,  ) )
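
A sketch of that master-detail layout (hypothetical names; "ord" instead
of "order" because order is a reserved word):

create table processes (
    id   bigserial primary key,
    name text not null
);
create table stages (
    process_id bigint  not null references processes,
    ord        integer not null,  -- position within the process
    name       text    not null,
    primary key (process_id, ord)
);
-- forward:  select name from stages where process_id = 1 order by ord;
-- backward: the same query with order by ord desc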


Francisco Olarte.




unable to drop index because it does not exists

2019-09-23 Thread Luca Ferrari
Hi,
running 11.5 I've got a partitioned table where I want to destroy an
index (on a column that has nothing to do with the partition):


respidb=# \d+ respi.root

    Column    |            Type             |
--------------+-----------------------------+
 sen_id       | integer                     |
 mis_flusso   | integer                     |
 mis_tasso    | integer                     |
 mis_velocita | integer                     |
 mis_ora      | timestamp without time zone |
 pk           | bigint                      |
 ts           | timestamp without time zone |
Partition key: LIST (date_part('year'::text, mis_ora))
Indexes:
    "idx_root_sensore" btree (sen_id)
    "idx_ts" btree (ts)
Partitions: respi.y2018 FOR VALUES IN ('2018'), PARTITIONED,
            respi.y2019 FOR VALUES IN ('2019'), PARTITIONED,
            respi.y2020 FOR VALUES IN ('2020'), PARTITIONED

respidb=# drop index idx_root_sensore;
ERROR:  index "idx_root_sensore" does not exist

Of course the index exists:

SELECT oid, relname, relkind FROM pg_class WHERE relname = 'idx_root_sensore';
-[ RECORD 1 ]-------------
oid     | 40950
relname | idx_root_sensore
relkind | I


I already did a manual vacuum on the table.
Any idea?

Luca




Re: pg_receivexlog or archive_command

2019-09-23 Thread Luca Ferrari
On Mon, Sep 23, 2019 at 10:55 AM Andreas Kretschmer
 wrote:
> you can use both of them, and you should consider "Barman".

If I remember well Barman uses pg_receivexlog when streaming, and
archive_command when doing a "normal" backup.
Also pgbackrest is another good tool for backup.
The idea here should be not reinventing the wheel.

Luca




Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 13:44 schrieb Luca Ferrari:

On Mon, Sep 23, 2019 at 10:55 AM Andreas Kretschmer
 wrote:

you can use both of them, and you should consider "Barman".

If I remember well Barman uses pg_receivexlog when streaming, and
archive_command when doing a "normal" backup.


Barman < version 2 can only archive_command, version 2 and higher can both.


Also pgbackrest is another good tool for backup.
The idea here should be not reinventing the wheel.

ack.

Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: unable to drop index because it does not exists

2019-09-23 Thread Ron

On 9/23/19 6:39 AM, Luca Ferrari wrote:

Hi,
running 11.5 I've got a partitioned table where I want to destroy an
index (on a column that has nothing to do with the partition):


respidb=# \d+ respi.root

    Column    |            Type             |
--------------+-----------------------------+
 sen_id       | integer                     |
 mis_flusso   | integer                     |
 mis_tasso    | integer                     |
 mis_velocita | integer                     |
 mis_ora      | timestamp without time zone |
 pk           | bigint                      |
 ts           | timestamp without time zone |
Partition key: LIST (date_part('year'::text, mis_ora))
Indexes:
    "idx_root_sensore" btree (sen_id)
    "idx_ts" btree (ts)
Partitions: respi.y2018 FOR VALUES IN ('2018'), PARTITIONED,
            respi.y2019 FOR VALUES IN ('2019'), PARTITIONED,
            respi.y2020 FOR VALUES IN ('2020'), PARTITIONED

respidb=# drop index idx_root_sensore;
ERROR:  index "idx_root_sensore" does not exist

Of course the index exists:

SELECT oid, relname, relkind FROM pg_class WHERE relname = 'idx_root_sensore';
-[ RECORD 1 ]-
oid | 40950
relname | idx_root_sensore
relkind | I


I already did a manual vacuum on the table.
Any idea?


Try specifying the schema name:

drop index respi.idx_root_sensore;
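
If it's not obvious which schema an index lives in, something like this
(a sketch against the system catalogs) will tell you:

SELECT n.nspname AS schema, c.relname AS index_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'idx_root_sensore';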


--
Angular momentum makes the world go 'round.




Re: unable to drop index because it does not exists

2019-09-23 Thread Luca Ferrari
On Mon, Sep 23, 2019 at 2:23 PM Ron  wrote:
> Try specifying the schema name:

Yeah, shame on me, I always forget the schema for indexes!

Thanks,
Luca




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Tom Lane
Corey Taylor  writes:
> I found after testing other situations, that the psql command would always
> finish as expected after canceling the first query that ran too long.  I
> was able to reproduce this scenario with psql and pgadmin4 with various
> combinations.

Well, that's just weird.

It's well known that the second run of a query can be much faster due
to having fully-populated caches to draw on, but you seem to have a
case that may go beyond that.  Maybe check for waiting on a lock?
It'd be useful to look in pg_stat_activity and/or top(1) while the
initial query is running, to see if it seems to be eating CPU or
is blocked on some condition.  (I forget how thorough the
wait_event coverage is in 9.6, but it does at least have those
columns.)
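
For example, something along these lines (a sketch; pg_blocking_pids() is
available in 9.6):

SELECT pid, state, wait_event_type, wait_event,
       pg_blocking_pids(pid) AS blocked_by,  -- pids holding locks this backend is waiting on
       query
FROM pg_stat_activity
WHERE state <> 'idle';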

Can you create a self-contained test case that acts like this?

regards, tom lane




Re: How to represent a bi-directional list in db?

2019-09-23 Thread Pankaj Jangid
Francisco Olarte  writes:

>> Could you please elaborate? Suppose I have this table,
>> CREATE TABLE stages (
>> id SERIAL PRIMARY KEY,
>> name VARCHAR(80) NOT NULL,
>> next_id INTEGER REFERENCES stages NULL
>> );
>> What would be the backward query in that case? Forward is clear. This is
>> forward query,
>> SELECT name FROM stages WHERE next_id = 123;
>
> No. That is a BACKWARDS QUERY. You are in row 123, you go BACK to its
> preceding one.
> If you need a traversable list containing (ListID, id, name) = x,1,A;
> x,2,b; x,3,c ( I've added the ListId column to make it more
> interesting/realistic, you normally do not have a single table)
> In SQL you can build a (ListId, id, prev_id, name ) table ( PREV is
> easier, as when you insert a row, in a normal application, you know
> the previous one, but not the next one ) with the data
> (x,1,null,a),(x,2,1,b),(x,3,2,c) ( the last one is a synthetic
> sentinel and assumes nullable id), you can do it in a lot of ways.
>
> To traverse it forward you just query "select id where listid=x and
> prev_id is null" to locate the head (1), and then just go forward by
> selecting with prev_id = the last id you got, until you hit zero results.
>
> To traverse backwards there are several ways. In the real cases I've
> used I always had a "list row" where I could store the node for the
> 1st stage. In those cases I linked them circularly, (id=1, prev=3), so
> bidirectional traversing was easy. Or you can use a special sentinel
> node ( with a marker, like name=null). The thing is you locate the
> last row, and then just query with id = the last prev_id you got. I do not
> remember the details, but probably your "stages" are stages of
> something which has a row, which can readily have a "first_stage_id"
> or something similar.
>
> Lists in tables are not the same as in C, where you directly store
> pointers which point outwards. In this case any unique data serves as
> a pointer, slow ( table scan ) by default, faster if you index the
> column.
>
> Anyway, unless you need the "linked list" functionality for something
> ( really heavy manipulation of large stage lists, splicing things
> around ), I've normally found it's easier, in sql, to model this kind
> of thing with a master-detail + order column.
> ( whatever = (id, , first_stage_id), stages=(id, order,  ) )
>
Thanks a lot Francisco. This is great help.

My stages are stages of processes. So yes processes are also stored in a
table. I got the idea. I'll add another column in the processes table
which points to the first stage (first_stage_id). And quries

Forward pass:
1. select name from stages where id = 
2. select name from stages where prev_id = 
3. repeat (2)

Backward pass:
1. select name from stages where prev_id = 
2. select name from stages where id = 
3. repeat (2)

This is assuming I also create a circular list. I can also store
last_stage_id in the process table if we don't want to create circular
list in db.
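
If the chain is head-terminated (prev_id null on the first stage) rather
than circular, both passes can also be done in one query with a recursive
CTE (a sketch, assuming stages(id, prev_id, name)):

with recursive chain as (
    select s.id, s.prev_id, s.name, 1 as pos
    from stages s
    where s.prev_id is null            -- head of the chain
  union all
    select s.id, s.prev_id, s.name, c.pos + 1
    from stages s
    join chain c on s.prev_id = c.id
)
select id, name from chain order by pos;  -- order by pos desc for the backward pass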

Regards.
-- 
Pankaj Jangid




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 12:07 AM, Corey Taylor wrote:

First thing I'd look at is whether it's the same execution environment
in both cases, eg same search_path.


As far as I can tell, it's the same execution environment, same 
search_path and same user.


I found after testing other situations, that the psql command would 
always finish as expected after canceling the first query that ran too 
long.  I was able to reproduce this scenario with psql and pgadmin4 with 
various combinations.


Any suggestions on what that would indicate?  The canceled query does 
not complete as there are no rows and no duplicate errors when running 
the second time.


Can we see the actual function/query?

Also the schema of the table(s) involved?



corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to get timezone offset in timestamp with time zone AT TIME ZONE output.

2019-09-23 Thread Adrian Klaver

On 9/23/19 1:32 AM, Paul McGarry wrote:

Hi there,

Does anyone have a good way of doing:

=
select '2020-04-04 15:00:00+00'::timestamp with time zone AT TIME ZONE 
'Australia/Sydney';

       timezone
-
  2020-04-05 02:00:00

select '2020-04-04 16:00:00+00'::timestamp with time zone AT TIME ZONE 
'Australia/Sydney';

       timezone
-
  2020-04-05 02:00:00
=

but with the output including the offset, eg:
2020-04-05 02:00:00+11
2020-04-05 02:00:00+10
respectively, so it is clear which 2am it is (the times above are around 
a DST switch)?



I have seen a couple of suggestions involving setting the desired time 
zone in the db session environment, but my actual use case will be a bit 
more complex, something like,



CREATE TABLE users (
    user_id bigint,
    user_timezone text -- Eg 'Australia/Sydney','Asia/Hong_Kong'
);
CREATE TABLE data (
    id bigint,
    user_id bigint,
    datetime timestamp with time zone
);
INSERT INTO users VALUES (1,'Australia/Sydney');
INSERT INTO users VALUES (2,'Asia/Hong_Kong');
INSERT INTO data VALUES (5,1,'2020-04-05 02:00:00');
INSERT INTO data VALUES (6,2,'2020-04-05 02:00:00');

and I'll want to run a query like:

select id, datetime,
   datetime AT TIME ZONE (select user_timezone from users where 
data.user_id=users.user_id) as usertime from data;



where I want the usertime to be returned in the corresponding users 
timezone, but with the offset. Therefore whatever renders the offset 
needs to be capable of doing it per row, independently of the 
server/session time zone.


This has come up before and the general suggestion has been to have a 
column for a naive(timestamp w/o tz) timestamp and a column for the 
timezone.  You are on the way there, only need to change the type of 
'datetime' field.
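
A sketch of that layout (keeping the zone in users.user_timezone and
making 'datetime' naive; hypothetical column names):

create table data (
    id       bigint,
    user_id  bigint,
    datetime timestamp  -- naive local time, interpreted via users.user_timezone
);

select d.id,
       d.datetime                              as local_time,
       u.user_timezone,
       d.datetime at time zone u.user_timezone as absolute_time  -- timestamptz, shown in the session zone
from data d
join users u on u.user_id = d.user_id;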




And to_char isn't much help:


select to_char('2020-04-04 15:00:00+00'::timestamp with time zone AT
TIME ZONE 'Australia/Sydney','YYYY-MM-DD HH24:MI:SSOF');

         to_char
------------------------
  2020-04-05 02:00:00+00

  because to_char only deals with a timestamp and loses the timezone 
info and you end up with something very wrong.


Any ideas?

Thanks for any help.

Paul



--
Adrian Klaver
adrian.kla...@aklaver.com




Autovacuum lock conflict

2019-09-23 Thread Christophe Escobar
Hi,

I am using PostgreSQL 9.6.13.

I have some questions about auto vacuums.

I find it quite hard to find information about vacuum locking in the
documentation, but from what I found by running the command myself,
a simple vacuum holds a Share Update Exclusive lock.

We are using auto vacuums on our database and recently we had an issue
when rolling out a new version of our solution.
The new version included a migration creating a new index for a big table.

We do try to have no maintenance when migrating, thus we generally use
CREATE INDEX CONCURRENTLY to avoid locking a table for reading and
writing.

We went across an issue with our index creation being locked by an
auto vacuum, as both processes hold a Share Update Exclusive lock.

Thankfully for us, we run our database migration with a small lock
timeout so it was not too dangerous, but we are left with some doubt
about our "smooth" process.

In these cases, we want to have our migration run in priority as it
blocks our release and everything that goes along.

I've read - but I cannot find it anywhere - that a vacuum process
"should" kill itself if it finds out that it is blocking any other
query;
but only if the notice "to prevent wraparound" is not present. (have I
missed official documentation somewhere..?)

We would like to have inputs on how to cleanly prevent these issues in
the future.

1) What's the actual answer about a vacuum killing itself if it blocks
a query? Is that true?
How fast is it to kill itself? As we have a 2s lock timeout, if it
takes 5s to check that it should kill itself, we won't benefit from
this behaviour.

2) What can we do to prevent any concurrent index creation from being
locked out by auto vacuums?
We have thought of a dirty solution consisting of:
- disabling auto vacuums before a migration
- kill current pids with a vacuum
- do our migration
- enable back auto vacuum.
We find this solution awful, error-prone and "complex" to have.
Are there any other way except having manuals vacuum being played
during the night?

Thanks in advance for your help,

Regards,

-- 
Christophe Escobar




Re: Autovacuum lock conflict

2019-09-23 Thread Adrian Klaver

On 9/23/19 8:09 AM, Christophe Escobar wrote:

Hi,

I am using PostgreSQL 9.6.13.

I have some questions about auto vacuums.

I find it quite hard to find information about vacuum locking in the
documentation, but from what I found by running the command myself,
a simple vacuum holds a Share Update Exclusive lock.

We are using auto vacuums on our database and recently we had an issue
when rolling out a new version of our solution.
The new version included a migration creating a new index for a big table.

We do try to have no maintenance when migrating, thus we generally use
CREATE INDEX CONCURRENTLY to avoid locking a table for reading and
writing.

We went across an issue with our index creation being locked by an
auto vacuum, as both processes hold a Share Update Exclusive lock.

Thankfully for us, we run our database migration with a small lock
timeout so it was not too dangerous, but we are left with some doubt
about our "smooth" process.

In these cases, we want to have our migration run in priority as it
blocks our release and everything that goes along.

I've read - but I cannot find it anywhere - that a vacuum process
"should" kill itself if it finds out that it is blocking any other
query;
but only if the notice "to prevent wraparound" is not present. (have I
missed official documentation somewhere..?)


It will be, courtesy of PostgreSQL Weekly News - September 22, 2019:

https://www.postgresql.org/message-id/20190922194743.GA24367%40fetter.org

Amit Kapila pushed:


- Doc: document autovacuum interruption. It's important users be able to 
know
  (without looking at the source code) that running DDL or DDL-like 
commands can
  interrupt autovacuum which can lead to a lot of dead tuples and hence 
slower

  database operations.  Reported-by: James Coleman Author: James Coleman
  Reviewed-by: Amit Kapila Backpatch-through: 9.4 Discussion:

https://postgr.es/m/CAAaqYe-XYyNwML1=f=gnd0qWg46PnvD=bdrcz5-l94b887x...@mail.gmail.com

https://git.postgresql.org/pg/commitdiff/82fa3ff867219a212a467317a77011df29cb5903

And the diff:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=70377cf4c6bf4eb4b2d1209752a300d5f3571145



We would like to have inputs on how to cleanly prevent these issues in
the future.

1) What's the actual answer about a vacuum killing itself if it blocks
a query? Is that true?
 How fast is it to kill itself? As we have a 2s lock timeout, if it
takes 5s to check that it should kill itself, we won't benefit from
this behaviour.

2) What can we do to prevent any concurrent index creation from being
locked out by auto vacuums?
We have thought of a dirty solution consisting of:
- disabling auto vacuums before a migration
- kill current pids with a vacuum
- do our migration
- enable back auto vacuum.
We find this solution awful, error-prone and "complex" to have.
Are there any other way except having manuals vacuum being played
during the night?

Thanks in advance for your help,

Regards,




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pg_auto_failover

2019-09-23 Thread Sonam Sharma
I have installed PostgreSQL 11 from source (untarred the file and then ran
make install as the postgres user, since we don't have root access). Can
you please help with installing the extension for pg_auto_failover? Is it
possible as the postgres user only?

Thanks,
Sonam

On Wed, Sep 18, 2019, 8:31 PM Adrian Klaver 
wrote:

> On 9/18/19 7:56 AM, Sonam Sharma wrote:
> > Adrian,
> >
> > Postgres version : 10.7
> > Os version : Linux Redhat 7.4
>
> If you are installing from the PGDG repos then the package is there:
>
>
> https://yum.postgresql.org/10/redhat/rhel-7-x86_64/repoview/pg_auto_failover_10.html
>
>
> You can also go here:
>
> https://github.com/citusdata/pg_auto_failover
>
> and see your other options.
>
> >
> > On Wed, Sep 18, 2019, 8:01 PM Adrian Klaver  > > wrote:
> >
> > On 9/18/19 7:19 AM, Sonam Sharma wrote:
> >  > Can someone please guide me in installing pg_auto_failover..
> > where can I
> >  > get the rpm package for this ?
> >
> >
> > OS and version?
> >
> > Postgres version?
> >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Pg_auto_failover

2019-09-23 Thread Adrian Klaver

On 9/23/19 10:38 AM, Sonam Sharma wrote:
I have installed PostgreSQL 11 from source (untarred the file and then ran 
make install as the postgres user, since we don't have root access). 
Can you please help with installing the extension for pg_auto_failover? 
Is it possible as the postgres user only?


Follow the install from source instructions here:

https://github.com/citusdata/pg_auto_failover



Thanks,
Sonam

On Wed, Sep 18, 2019, 8:31 PM Adrian Klaver > wrote:


On 9/18/19 7:56 AM, Sonam Sharma wrote:
 > Adrian,
 >
 > Postgres version : 10.7
 > Os version : Linux Redhat 7.4

If you are installing from the PGDG repos then the package is there:


https://yum.postgresql.org/10/redhat/rhel-7-x86_64/repoview/pg_auto_failover_10.html


You can also go here:

https://github.com/citusdata/pg_auto_failover

and see your other options.

 >
 > On Wed, Sep 18, 2019, 8:01 PM Adrian Klaver
mailto:adrian.kla...@aklaver.com>
 > >> wrote:
 >
 >     On 9/18/19 7:19 AM, Sonam Sharma wrote:
 >      > Can someone please guide me in installing pg_auto_failover..
 >     where can I
 >      > get the rpm package for this ?
 >
 >
 >     OS and version?
 >
 >     Postgres version?
 >
 >
 >
 >     --
 >     Adrian Klaver
 > adrian.kla...@aklaver.com 
>
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: unable to drop index because it does not exists

2019-09-23 Thread Michael Lewis
>
> Partition key: LIST (date_part('year'::text, mis_ora))
>

As an aside, you may benefit from switching to range partitioning*
depending on how your queries are written. If you have conditions such as
"WHERE mis_ora BETWEEN CURRENT_DATE - 30 AND CURRENT_DATE" or similar, then
the fact that your partitioning is defined as a function result will mean
all partitions get scanned instead of partitions being pruned as early as
possible in the process. That's my understanding anyway. If you always
include a date_part('year', mis_ora) comparison in your where/join
conditions, then you'll likely be just fine. Do as you need.

*eg '01/01/2018' to '01/01/2019' for the 2018 partition since upper bound
is always exclusive
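
A sketch of what that could look like (hypothetical names, columns trimmed):

create table respi.root_by_range (
    sen_id  integer,
    mis_ora timestamp
) partition by range (mis_ora);

create table respi.y2018_range partition of respi.root_by_range
    for values from ('2018-01-01') to ('2019-01-01');  -- upper bound exclusive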


Re: Autovacuum lock conflict

2019-09-23 Thread Laurenz Albe
Christophe Escobar wrote:
> We do try to have no maintenance when migrating, thus we generally
> use CREATE INDEX CONCURRENTLY to avoid locking a table for reading
> and writing.
> 
> We went across an issue with our index creation being locked by an
> auto vacuum, as both processes hold a Share Update Exclusive lock.
> 
> Thankfully for us, we run our database migration with a small lock
> timeout so it was not too dangerous, but we are left with some doubt
> about our "smooth" process.
> 
> In these cases, we want to have our migration run in priority as it
> blocks our release and everything that goes along.
> 
> I've read - but I cannot find it anywhere - that a vacuum process
> "should" kill itself if it finds out that it is blocking any other
> query;
> but only if the notice "to prevent wraparound" is not present. (have
> I
> missed official documentation somewhere..?)
> 
> We would like to have inputs on how to cleanly prevent these issues
> in the future.
> 
> 1) What's the actual answer about a vacuum killing itself if it
> blocks
> a query? Is that true?
> How fast is it to kill itself? As we have a 2s lock timeout, if
> it
> takes 5s to check that it should kill itself, we won't benefit from
> this behaviour.

It must be an anti-wraparound VACUUM, because a normal autovacuum would
terminate after about 1 second ("deadlock_timeout" parameter).

> 2) What can we do to prevent any concurrent index creation from being
> locked out by auto vacuums?
> We have thought of a dirty solution consisting of:
> - disabling auto vacuums before a migration
> - kill current pids with a vacuum
> - do our migration
> - enable back auto vacuum.
> We find this solution awful, error-prone and "complex" to have.
> Are there any other way except having manuals vacuum being played
> during the night?

Yes, that is ugly and not a good idea.  Unless done very carefully,
this has the potential to harm your database.  VACUUM is important.

I would make sure that all tables are vacuumed in a timely fashion.

- Avoid explicit table locks or other operations that would
  block autovacuum (and so prevent it, because it will terminate).

- For tables that receive UPDATEs and DELETEs, make sure that
  "autovacuum_vacuum_cost_delay" is small enough so that autovacuum
  finishes quickly (see the sketch below).

- For tables that receive only INSERTs, schedule a regular VACUUM
  with "cron" or similar.  Unfortunately, PostgreSQL isn't very smart
  about vacuuming insert-only tables.
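
For the cost-delay point, a per-table sketch (hypothetical table name and
values; tune to your workload):

alter table some_busy_table set (
    autovacuum_vacuum_cost_delay   = 2,   -- milliseconds; 0 disables throttling
    autovacuum_vacuum_scale_factor = 0.05
);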

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: pg_receivexlog or archive_command

2019-09-23 Thread Peter Eisentraut
On 2019-09-23 10:25, Vikas Sharma wrote:
> I am wondering which one is the best way to archive the xlogs for Backup
> and Recovery - pg_receivexlog or archive_command.

I recommend using pg_receivexlog.  It has two important advantages over
archive_command:  1) You can have multiple instances of pg_receivexlog
running and copying things to different places.  This is complicated to
do correctly with archive_command.  2) pg_receivexlog will fsync the
files it writes.  This is also complicated to do correctly with
archive_command.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Error during analyze after upgrade from 10.10 -> 11.4

2019-09-23 Thread Ben Snaidero
Hi,

I get the following error after upgrading from Postgres 10.10 -> Postgres
11.4

After upgrade completes when running "vacuumdb -p 5432 -U postgres -a -Z
--analyze-in-stages" I get the following error:


vacuumdb: processing database "#DBNAME#": Generating minimal optimizer statistics (1 target)
vacuumdb: vacuuming of database "#DBNAME#" failed: ERROR:  could not access status of transaction 6095
DETAIL:  Could not open file "pg_xact/": No such file or directory.

After logging into the database using psql and running "VACUUM FULL
#table#" for each table I can see it's an issue with pg_statistic







Running VACUUM FULL on table pg_statistic (via: psql.exe -d $db -c "vacuum full $table"):

ERROR:  could not access status of transaction 6095
DETAIL:  Could not open file "pg_xact/": No such file or directory.

Any ideas as to how this could have happened or how I can fix this issue?
I had the issue once on another server and running VACUUM FULL on entire
database fixed the error.

Note: I ran "VACUUM FREEZE ANALYZE" before the upgrade on every database
and there were no errors.


Ben Snaidero
*Geotab*
Senior Database Specialist
Direct +1 (289) 230-7749
Toll-free +1 (877) 436-8221
Visit www.geotab.com
Twitter  | Facebook
 | YouTube
 | LinkedIn



can't install pg 12 beta on centos 6

2019-09-23 Thread Kevin Brannen
I thought I’d get a jump on testing this since we’re a little slow sometimes. 😊

I’ve spun up a new VM with Centos 6.10 (the latest). I found 
https://yum.postgresql.org/testing/12/redhat/rhel-6-x86_64/repoview/ to get the 
packages (BTW, that was kind of hard to find). I ended up with:

postgresql12-12beta4-1PGDG.rhel6.x86_64.rpm
postgresql12-contrib-12beta4-1PGDG.rhel6.x86_64.rpm
postgresql12-libs-12beta4-1PGDG.rhel6.x86_64.rpm
postgresql12-plperl-12beta4-1PGDG.rhel6.x86_64.rpm
postgresql12-server-12beta4-1PGDG.rhel6.x86_64.rpm

Then I did an “rpm -ivh *” and found I was missing a few things. I downloaded 
and installed most of what was missing, but I’ve got 2 dependencies that I 
can’t resolve.

error: Failed dependencies:
python2-libs is needed by postgresql12-12beta4-1PGDG.rhel6.x86_64
openssl-libs >= 1.0.2k is needed by 
postgresql12-libs-12beta4-1PGDG.rhel6.x86_64

Openssl is at “1.0.1e” from CentOS. So where does this “1.0.2k” come from?

Centos 6.10 has nothing called python2-libs, though I do have:

$ rpm -qa | grep python | grep libs
python-libs-2.6.6-66.el6_8.x86_64

It *feels* like the packager of the PG rpms took a dependency list from RH 7.x 
and gave it to RH 6.x.

Since this is only a test on my part, I wanted to use the community packages 
instead of building from source. Am I going to have to build from source at 
this point in the game until we get to at least the RC’s?

Thanks,
Kevin


Re: Error during analyze after upgrade from 10.10 -> 11.4

2019-09-23 Thread Tom Lane
Ben Snaidero  writes:
> After upgrade completes when running "vacuumdb -p 5432 -U postgres -a -Z
> --analyze-in-stages" I get the following error:

> vacuumdb: processing database "#DBNAME#": Generating minimal optimizer statistics (1 target)
> vacuumdb: vacuuming of database "#DBNAME#" failed: ERROR:  could not access status of transaction 6095
> DETAIL:  Could not open file "pg_xact/": No such file or directory.

> After logging into the database using psql and running "VACUUM FULL
> #table#" for each table I can see it's an issue with pg_statistic

If it's only pg_statistic then you're in luck, because all the data
in that is rebuildable.  You can try "delete from pg_statistic",
and then "vacuum full pg_statistic".  I'm not sure that will work
though, it may hit the same problem.  If so, you need a bigger
hammer: "TRUNCATE pg_statistic" will fix it, but you'll need to
stop the server and restart with allow_system_table_mods enabled
to be allowed to do that.  (Turn allow_system_table_mods back
off afterwards!)

Once you've got an empty pg_statistic, run "ANALYZE;" (as superuser)
to rebuild all the stats.
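
In other words, roughly (a sketch of the sequence described above):

DELETE FROM pg_statistic;
VACUUM FULL pg_statistic;
ANALYZE;   -- as superuser

-- if the VACUUM FULL hits the same error: restart the server with
-- allow_system_table_mods enabled, then
TRUNCATE pg_statistic;
ANALYZE;
-- ...and turn allow_system_table_mods back off afterwards.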

Hard to tell what the underlying issue is here --- perhaps pg_upgrade
messed up, but it would take some detailed investigation to find out.

regards, tom lane




RE: citext, actually probably using extensions

2019-09-23 Thread Kevin Brannen
> From: Morris de Oryx 
>
> Not sure about best practices, but what I'm going is like this:
>
> * Create a schema named extensions.
> * Install extensions in this special schema only. I don't put anything else 
> in there.
> * Put the extensions schema early (left) in the search_path for each role.
> * Grant execute access permissively on the functions in that schema.
>
> If there's something deeply flawed about this strategy, I'd be keen to hear 
> about it. On the positive side, I find it simple to understand, maintain, and 
> explain to other people. YMMV
>

We do approximately the same thing if it makes you feel better, although we 
call our schema "common".


Re: can't install pg 12 beta on centos 6

2019-09-23 Thread Adrian Klaver

On 9/23/19 12:04 PM, Kevin Brannen wrote:
I thought I’d get a jump on testing this since we’re a little slow 
sometimes. 😊


I’ve spun up a new VM with Centos 6.10 (the latest). I found 
https://yum.postgresql.org/testing/12/redhat/rhel-6-x86_64/repoview/ to 
get the packages (BTW, that was kind of hard to find). I ended up with:


You might want to take a look at:

https://people.planetpostgresql.org/devrim/index.php?/archives/101-Installing-PostgreSQL-12-betaRC-on-RHELCentOSFedora.html

and change to RH/CentOS 6.




postgresql12-12beta4-1PGDG.rhel6.x86_64.rpm

postgresql12-contrib-12beta4-1PGDG.rhel6.x86_64.rpm

postgresql12-libs-12beta4-1PGDG.rhel6.x86_64.rpm

postgresql12-plperl-12beta4-1PGDG.rhel6.x86_64.rpm

postgresql12-server-12beta4-1PGDG.rhel6.x86_64.rpm

Then I did an “rpm -ivh *” and found I was missing a few things. I 
downloaded and installed most of what was missing, but I’ve got 2 
dependencies that I can’t resolve.


error: Failed dependencies:

     python2-libs is needed by postgresql12-12beta4-1PGDG.rhel6.x86_64

     openssl-libs >= 1.0.2k is needed by 
postgresql12-libs-12beta4-1PGDG.rhel6.x86_64


Openssl is at “1.0.1e” from CentOS. So where does this “1.0.2k” come from?

Centos 6.10 has nothing called python2-libs, though I do have:

$ rpm -qa | grep python | grep libs

python-libs-2.6.6-66.el6_8.x86_64

It **feels** like the packager of the PG rpms took a dependency list 
from RH 7.x and gave it to RH 6.x.


Since this is only a test on my part, I wanted to use the community 
packages instead of building from source. Am I going to have to build 
from source at this point in the game until we get to at least the RC’s?


Thanks,

Kevin




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Autovacuum lock conflict

2019-09-23 Thread Michael Lewis
> - For tables that receive only INSERTs, schedule a regular VACUUM
>   with "cron" or similar.  Unfortunately, PostgreSQL isn't very smart
>   about vacuuming insert-only tables.
>


What is the need to vacuum on an insert only table? Does that just maintain
the freespace map?


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 8:57 AM Tom Lane  wrote:

> Maybe check for waiting on a lock?
> It'd be useful to look in pg_stat_activity and/or top(1) while the
> initial query is running, to see if it seems to be eating CPU or
> is blocked on some condition.
>

I think this will provide information that makes it seem less weird.

With your suggestion, I monitored postgres via top and pg_stat_activity in
the various scenarios and found that an autovacuum was triggering which
covered tables used in the insert into select.  What seems to be happening
is the autovacuum takes just about the same time as I give the query to run
before giving up on it.  The next time I run the query, the autovacuum is
complete and the query runs normally.

Of course, I'd like to understand why the query never finishes.

when autovacuum is running:
PID  PPID USER STAT   VSZ %VSZ CPU %CPU COMMAND
   38 1 postgres R 158m   8%   1  50% postgres: revwaste
portal-local 172.19.0.5(39956) SELECT
   36 1 postgres S 171m   9%   0   0% postgres: autovacuum worker
process   portal-local
   34 1 postgres S 186m   9%   0   0% postgres: autovacuum worker
process   portal-local

after autovacuum finishes and during remaining soft-lock:
   PID  PPID USER STAT   VSZ %VSZ CPU %CPU COMMAND
   45 1 postgres R 259m  13%   0  50% postgres: revwaste
portal-local 172.19.0.5(39962) SELECT
   20 1 postgres S 153m   8%   1   0% postgres: writer process
   22 1 postgres S 153m   8%   0   0% postgres: autovacuum launcher
process

Same for pg_stat_activity:

         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type | wait_event | state  | backend_xid | backend_xmin |                query
-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+-----------------------------------
 2019-09-23 20:29:45.127527+00 | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.128861+00 |                 |            | active |         808 |          808 | select import_wss()
 2019-09-23 20:30:01.624853+00 | 2019-09-23 20:30:58.047317+00 | 2019-09-23 20:30:58.047317+00 | 2019-09-23 20:30:58.047318+00 |                 |            | active |             |          808 | autovacuum: ANALYZE wss.RowCoding
 2019-09-23 20:31:01.644824+00 | 2019-09-23 20:31:01.666711+00 | 2019-09-23 20:31:01.666711+00 | 2019-09-23 20:31:01.666712+00 |                 |            | active |             |          808 | autovacuum: ANALYZE wss.WSSData
 2019-09-23 20:31:14.101808+00 | 2019-09-23 20:31:14.103306+00 | 2019-09-23 20:31:14.103306+00 | 2019-09-23 20:31:14.103307+00 |                 |            | active |             |          808 | select * from pg_stat_activity
(4 rows)

         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type | wait_event | state  | backend_xid | backend_xmin |                query
-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+-----------------------------------
 2019-09-23 20:29:45.127527+00 | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.128861+00 |                 |            | active |         808 |          808 | select import_wss()
 2019-09-23 20:34:21.01283+00  | 2019-09-23 20:34:21.014473+00 | 2019-09-23 20:34:21.014473+00 | 2019-09-23 20:34:21.014475+00 |                 |            | active |             |          808 | select * from pg_stat_activity
(2 rows)


> Can you create a self-contained test case that acts like this?
>

I can try to duplicate it if this doesn't provide the details needed.

corey


RE: can't install pg 12 beta on centos 6

2019-09-23 Thread Kevin Brannen
> Adrian Klaver wrote:
> On 9/23/19 12:04 PM, Kevin Brannen wrote:
> > I thought I’d get a jump on testing this since we’re a little slow
> > sometimes. ??
> >
> > I’ve spun up a new VM with Centos 6.10 (the latest). I found
> > https://yum.postgresql.org/testing/12/redhat/rhel-6-x86_64/repoview/
> > to get the packages (BTW, that was kind of hard to find). I ended up with:
>
> You might want to take a look at:
>
> https://people.planetpostgresql.org/devrim/index.php?/archives/101-Installing-PostgreSQL-12-betaRC-on-RHELCentOSFedora.html
>
> and change to RH/CentOS 6.


Thanks for the idea; however, that didn't work.


$ sudo yum -y install 
https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
Loaded plugins: fastestmirror, security
Setting up Install Process
epel-release-latest-6.noarch.rpm
|  14 kB 00:00
Examining /var/tmp/yum-root-aZqbqq/epel-release-latest-6.noarch.rpm: 
epel-release-6-8.noarch
/var/tmp/yum-root-aZqbqq/epel-release-latest-6.noarch.rpm: does not update 
installed package.
Error: Nothing to do


It looks like they're not ready for Centos 6 yet and I'm not ready for 8.0 -- 
well, it's not even released until tomorrow and I'm sure that's a whole 'nother 
battle. I'll just grab the source and give that a try.

Kevin


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 9:22 AM Adrian Klaver 
wrote:

> Can we see the actual function/query?
>
> Also the schema of the table(s) involved?
>

Of course.  This was added to pull some data from legacy db tables into
something we could query while converting the process that populates the
legacy db. These are rather long.  I'll spare you the details of the other
tables it joins as they are basically id <=> name pairs for the most part.

This isn't meant to run often, but it does need to be run every time the
legacy db data is pulled over.

Import Function:

CREATE OR REPLACE FUNCTION import_wss()
RETURNS void AS $$
INSERT INTO wss_entries (
is_historical,
historical_path,
wss_import_row_index,
service_date,
original_sid,
client_id,
client_site_id,
material_group_id,
material_category_id,
material_id,
material_outcome_id,
targeted,
goal_percent,
weight_pounds,
old_vendor_name,
new_vendor_name,
vendor_id,
vendor_site_id,
old_service_type,
new_service_type,
old_quantity,
old_size,
old_frequency,
old_price,
old_market_index,
old_service_per_month,
old_units_per_haul,
old_avg_monthly_cost,
new_quantity,
new_size,
new_frequency,
new_price,
new_market_index,
new_service_per_month,
new_units_per_haul,
new_avg_monthly_cost,
is_haul,
haul_unit_id,
service_conversion_id,
num_hauls_per_weight,
compaction_ratio,
unit_weight,
full_percent,
benchmark_hauling_cost,
total_monthly_cost,
gross_savings,
month_of_shared_savings,
wr_fees,
net_savings,
new_account_number,
contract_expiration,
scheduled_service_days
) SELECT
true,
w."Path",
w."RowNum",
w."WSSDate"::date,
CASE
WHEN client_sites.id IS null THEN TRIM(w."SID")
ELSE null
END,
client_sites.client_id,
client_sites.id,
material_groups.id,
material_categories.id,
materials.id,
material_outcomes.id,
w."Targeted" = 'True',
w."Goal Percentage",
w."Total Pounds",
NULLIF(w."Old Vendor ", ''),
NULLIF(w."New Vendor", ''),
vendor_sites.vendor_id,
vendor_sites.id,
NULLIF(w."Old Service Description", ''),
NULLIF(w."New Service Description", ''),
NULLIF(w."Old Quan", ''),
NULLIF(w."Old Size", ''),
NULLIF(w."Old Freq", ''),
CASE
WHEN w."Old Price " ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."Old Price
"::float
ELSE 0::float
END,
CASE
WHEN w."Old Market Index" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."Old
Market Index"::float
ELSE 0::float
END,
CASE
WHEN w."Old Service Per Month" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN
w."Old Service Per Month"::float
ELSE 0::float
END,
CASE
WHEN w."Old Tons Per Haul" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN
w."Old Tons Per Haul"::float
ELSE 0::float
END,
CASE
WHEN w."Old Monthly Cost" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."Old
Monthly Cost"::float
ELSE 0::float
END,
NULLIF(w."New Quan", ''),
NULLIF(w."New Size", ''),
NULLIF(w."New Freq", ''),
CASE
WHEN w."New Price " ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."New Price
"::float
ELSE 0::float
END,
CASE
WHEN w."New Market Index" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."New
Market Index"::float
ELSE 0::float
END,
CASE
WHEN w."New Service Per Month" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN
w."New Service Per Month"::float
ELSE 0::float
END,
CASE
WHEN w."New Tons Per haul" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN
w."New Tons Per haul"::float
ELSE 0::float
END,
CASE
WHEN w."New Monthly Cost" ~ '^[0-9\-]+[0-9\.\-E]*$' THEN w."New
Monthly Cost"::float
ELSE 0::float
END,
r."LineType" = 'Haul',
haul_units.id,
service_conversions.id,
CASE
WHEN w."hauls coorelating to this weight" ~
'^[0-9\-]+[0-9\.\-E]*$' THEN w."hauls coorelating to this weight"::float
ELSE 0::float
END,
NULLIF(w."Estimated Compaction Ratio", ''),
CASE
WHEN w."Unit weight assigned to waste" ~
'^[0-9\-]+[0-9\.\-E]*$' THEN w."Unit weight assigned to waste"::float
ELSE 0::float
END,
CASE
WHEN w."Pct Full based on last survey" ~
'^[0-9\-]+[0-9\.\-E]*$' THEN w."Pct Full based on last survey"::float
ELSE 0::float
END,
CASE
   

Re: can't install pg 12 beta on centos 6

2019-09-23 Thread Adrian Klaver

On 9/23/19 2:00 PM, Kevin Brannen wrote:

Adrian Klaver wrote:
On 9/23/19 12:04 PM, Kevin Brannen wrote:

I thought I’d get a jump on testing this since we’re a little slow
sometimes. ??

I’ve spun up a new VM with Centos 6.10 (the latest). I found
https://yum.postgresql.org/testing/12/redhat/rhel-6-x86_64/repoview/
to get the packages (BTW, that was kind of hard to find). I ended up with:


You might want to take a look at:

https://people.planetpostgresql.org/devrim/index.php?/archives/101-Installing-PostgreSQL-12-betaRC-on-RHELCentOSFedora.html

and change to RH/CentOS 6.



Thanks for the idea; however, that didn't work.


Well the below is just one part of the above.

Did you edit /etc/yum.repos.d/pgdg-redhat-all.repo and set:

[pgdg12-updates-testing]
enabled=1

and then:

yum -y install postgresql12-server postgresql12-contrib

From your original post I gathered that you went here:

https://yum.postgresql.org/testing/12/redhat/rhel-6-x86_64/repoview/

and grabbed individual packages. Is that correct?




$ sudo yum -y install 
https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
Loaded plugins: fastestmirror, security
Setting up Install Process
epel-release-latest-6.noarch.rpm
|  14 kB 00:00
Examining /var/tmp/yum-root-aZqbqq/epel-release-latest-6.noarch.rpm: 
epel-release-6-8.noarch
/var/tmp/yum-root-aZqbqq/epel-release-latest-6.noarch.rpm: does not update 
installed package.
Error: Nothing to do


It looks like they're not ready for Centos 6 yet and I'm not ready for 8.0 -- 
well, it's not even released until tomorrow and I'm sure that's a whole 'nother 
battle. I'll just grab the source and give that a try.

Kevin




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 1:58 PM, Corey Taylor wrote:
On Mon, Sep 23, 2019 at 8:57 AM Tom Lane > wrote:


Maybe check for waiting on a lock?
It'd be useful to look in pg_stat_activity and/or top(1) while the
initial query is running, to see if it seems to be eating CPU or
is blocked on some condition.


I think this will provide information that makes it seem less weird.

With your suggestion, I monitored postgres via top and pg_stat_activity 
in the various scenarios and found that an autovacuum was triggering 
which covered tables used in the insert into select.  What seems to be 
happening is the autovacuum takes just about the same time as I give the 
query to run before giving up on it.  The next time I run the query, the 
autovacuum is complete and the query runs normally.


Hmm, are there triggers on wss_entries that are UPDATEing/DELETEing 
entries elsewhere?





Of course, I'd like to understand why the query never finishes.

when autovacuum is running:
PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    38     1 postgres R     158m   8%   1  50% postgres: revwaste 
portal-local 172.19.0.5(39956) SELECT
    36     1 postgres S     171m   9%   0   0% postgres: autovacuum 
worker process   portal-local
    34     1 postgres S     186m   9%   0   0% postgres: autovacuum 
worker process   portal-local


after autovacuum finishes and during remaining soft-lock:
    PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    45     1 postgres R     259m  13%   0  50% postgres: revwaste 
portal-local 172.19.0.5(39962) SELECT

    20     1 postgres S     153m   8%   1   0% postgres: writer process
    22     1 postgres S     153m   8%   0   0% postgres: autovacuum 
launcher process


Same for pg_stat_activity:

         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type | wait_event | state  | backend_xid | backend_xmin |                query
-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+-----------------------------------
 2019-09-23 20:29:45.127527+00 | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.128861+00 |                 |            | active |         808 |          808 | select import_wss()
 2019-09-23 20:30:01.624853+00 | 2019-09-23 20:30:58.047317+00 | 2019-09-23 20:30:58.047317+00 | 2019-09-23 20:30:58.047318+00 |                 |            | active |             |          808 | autovacuum: ANALYZE wss.RowCoding
 2019-09-23 20:31:01.644824+00 | 2019-09-23 20:31:01.666711+00 | 2019-09-23 20:31:01.666711+00 | 2019-09-23 20:31:01.666712+00 |                 |            | active |             |          808 | autovacuum: ANALYZE wss.WSSData
 2019-09-23 20:31:14.101808+00 | 2019-09-23 20:31:14.103306+00 | 2019-09-23 20:31:14.103306+00 | 2019-09-23 20:31:14.103307+00 |                 |            | active |             |          808 | select * from pg_stat_activity
(4 rows)

         backend_start         |          xact_start           |          query_start          |         state_change          | wait_event_type | wait_event | state  | backend_xid | backend_xmin |                query
-------------------------------+-------------------------------+-------------------------------+-------------------------------+-----------------+------------+--------+-------------+--------------+-----------------------------------
 2019-09-23 20:29:45.127527+00 | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.12886+00  | 2019-09-23 20:29:45.128861+00 |                 |            | active |         808 |          808 | select import_wss()
 2019-09-23 20:34:21.01283+00  | 2019-09-23 20:34:21.014473+00 | 2019-09-23 20:34:21.014473+00 | 2019-09-23 20:34:21.014475+00 |                 |            | active |             |          808 | select * from pg_stat_activity
(2 rows)

Can you create a self-contained test case that acts like this?


I can try to duplicate it if this doesn't provide the details needed.

corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 4:31 PM Adrian Klaver 
wrote:

> Hmm, are there triggers on wss_entries that are UPDATEing/DELETEing
> entries elsewhere?
>

No, that table is pretty much stand-alone.  What we're seeing here is most
likely caused by the initial copy of the legacy db tables.

This happens when creating the local test db which I do right before the
tests.  I can avoid the issue if I monitor top and wait for the analyze to
complete on all the related tables.

corey


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 2:34 PM, Corey Taylor wrote:
On Mon, Sep 23, 2019 at 4:31 PM Adrian Klaver > wrote:


Hmm, are there triggers on wss_entries that are UPDATEing/DELETEing
entries elsewhere?


No, that table is pretty much stand-alone.  What we're seeing here is 
most likely caused by the initial copy of the legacy db tables.


This happens when creating the local test db which I do right before the 
tests.  I can avoid the issue if I monitor top and wait for the analyze 
to complete on all the related tables.


Big hammer approach:

alter system set autovacuum = off;
SELECT pg_reload_conf();

Do work

alter system reset autovacuum;

SELECT pg_reload_conf();

Smaller hammer:

https://www.postgresql.org/docs/11/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS

"autovacuum_enabled, toast.autovacuum_enabled (boolean)"

which can be done by ALTER TABLE also:

https://www.postgresql.org/docs/11/sql-altertable.html
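
For example, a minimal sketch of the per-table variant (using wss."WSSData" only as a placeholder for whichever restored table is involved):

-- turn autovacuum off for this table before the heavy work
ALTER TABLE wss."WSSData" SET (autovacuum_enabled = false);

-- ... restore / import ...

-- put it back to the default afterwards
ALTER TABLE wss."WSSData" RESET (autovacuum_enabled);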



corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 4:50 PM Adrian Klaver 
wrote:

> Smaller hammer:
>
>
> https://www.postgresql.org/docs/11/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS
>
> "autovacuum_enabled, toast.autovacuum_enabled (boolean)"
>
> which can be done by ALTER TABLE also:
>
> https://www.postgresql.org/docs/11/sql-altertable.html



Thanks for the feedback.  I think we'll be able to work around this issue
in a way that is reliable enough to script.

I am curious if this is a "known" issue with restoring tables and ANALYZE
running at some point afterwards while a SELECT is being performed on the
tables.  It would be nice to know when to expect this kind of situation.

corey


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 3:43 PM, Corey Taylor wrote:
On Mon, Sep 23, 2019 at 4:50 PM Adrian Klaver wrote:


Smaller hammer:


https://www.postgresql.org/docs/11/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS

"autovacuum_enabled, toast.autovacuum_enabled (boolean)"

which can be done by ALTER TABLE also:

https://www.postgresql.org/docs/11/sql-altertable.html



Thanks for the feedback.  I think we'll be able to work around this 
issue in a way that is reliable enough to script.


I am curious if this is a "known" issue with restoring tables and ANALYZE 
running at some point afterwards while a SELECT is being performed on the 
tables.  It would be nice to know when to expect this kind of situation.


Usually what is seen here is the opposite, that tables are restored and 
ANALYZE is not run and performance on the subsequent queries is poor due 
to lack of current statistics.


What is the restore process?



corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 5:51 PM Adrian Klaver 
wrote:

> Usually what is seen here is the opposite, that tables are restored and
> ANALYZE is not run and performance on the subsequent queries is poor due
> to lack of current statistics.
>
> What is the restore process?
>

For these specific legacy db tables, they are isolated in a separate
schema.  We then use pg_restore to restore the entire schema.  Essentially
just:

pg_restore -n wss --no-owner

corey


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 3:56 PM, Corey Taylor wrote:
On Mon, Sep 23, 2019 at 5:51 PM Adrian Klaver wrote:


Usually what is seen here is the opposite, that tables are restored and
ANALYZE is not run and performance on the subsequent queries is poor
due
to lack of current statistics.

What is the restore process?


For these specific legacy db tables, they are isolated in a separate 
schema.  We then use pg_restore to restore the entire schema.  
Essentially just:


pg_restore -n wss --no-owner


Per my previous post and below, the above does not kick off an ANALYZE:

https://www.postgresql.org/docs/11/app-pgrestore.html

"Once restored, it is wise to run ANALYZE on each restored table so the 
optimizer has useful statistics; see Section 24.1.3 and Section 24.1.6 
for more information."
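
A minimal sketch of what that step could look like in the restore script (the dump file and database names below are only placeholders):

pg_restore -n wss --no-owner -d mydb legacy.dump
psql -d mydb -c 'ANALYZE'
# or equivalently: vacuumdb --analyze-only -d mydb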


So is there some other step in the process that occurs after the restore 
and before you run your function?




corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Corey Taylor
On Mon, Sep 23, 2019 at 7:23 PM Adrian Klaver 
wrote:

> "Once restored, it is wise to run ANALYZE on each restored table so the
> optimizer has useful statistics; see Section 24.1.3 and Section 24.1.6
> for more information."
>
> So is there some other step in the process that occurs after the restore
> and before you run your function?
>

There are several other restores called and a delete query that clears out
an unrelated table.

However, I think this solves the issue for us and the mystery. The ANALYZE
was missing, and adding it reduces an unending query down to a minute.  The
ANALYZE runs very quickly on its own, so we're simply going to fix the issue
by following the documented advice.

I guess the length of the query before/during the ANALYZE made it feel like
something more was wrong.

corey


Re: postgres 9.6: insert into select finishes only in pgadmin not psql

2019-09-23 Thread Adrian Klaver

On 9/23/19 5:28 PM, Corey Taylor wrote:
On Mon, Sep 23, 2019 at 7:23 PM Adrian Klaver wrote:


"Once restored, it is wise to run ANALYZE on each restored table so the
optimizer has useful statistics; see Section 24.1.3 and Section 24.1.6
for more information."

So is there some other step in the process that occurs after the
restore
and before you run your function?


There are several other restores called and a delete query that clears 
out an unrelated table.


However, I think this solves the issue for us and the mystery. The 
ANALYZE was missing, and adding it reduces an unending query down to a 
minute.  The ANALYZE runs very quickly on its own, so we're simply going 
to fix the issue by following the documented advice.


Aah, so it was the common case. Order is restored to the Postgres Universe:)



I guess the length of the query before/during the ANALYZE made it feel like 
something more was wrong.


corey



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to get timezone offset in timestamp with time zone AT TIME ZONE output.

2019-09-23 Thread Andrew Gierth
> "Adrian" == Adrian Klaver  writes:

 Adrian> This has come up before and the general suggestion has been to
 Adrian> have a column for a naive(timestamp w/o tz) timestamp and a
 Adrian> column for the timezone.

No, that's usually (not always) backwards, and in any event wouldn't
solve this particular issue.

-- 
Andrew (irc:RhodiumToad)




Re: Autovacuum lock conflict

2019-09-23 Thread Laurenz Albe
On Mon, 2019-09-23 at 13:53 -0600, Michael Lewis wrote:
> > - For tables that receive only INSERTs, schedule a regular VACUUM
> >   with "cron" or similar.  Unfortunately, PostgreSQL isn't very
> > smart
> >   about vacuuming insert-only tables.
> 
> 
> What is the need to vacuum on an insert only table? Does that just
> maintain the freespace map?

No, the idea is to get rows frozen *before* autovacuum launches
an anti-wraparound vacuum.
You could launch VACUUM (FREEZE) for best effect.

This way you can schedule the operation so that it doesn't
interfere with your other activities.
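
For example, a sketch of such a scheduled job (the database and table names are only placeholders):

# crontab entry: freeze the insert-only table nightly at 03:00
0 3 * * *  psql -d mydb -c 'VACUUM (FREEZE) insert_only_table'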

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com