Exclude logging certain connections?

2020-03-15 Thread Paul Förster
Hi,

is there a way to exclude certain connections, for example recurring monitoring 
connections, from the postgres.log?

I found this in an old post on stackexchange.com:

postgres=# create role mon login;
CREATE ROLE
postgres=# alter role mon set log_connections=off;
ERROR:  parameter "log_connections" cannot be set after connection start

(source: 
https://dba.stackexchange.com/questions/118018/is-it-possible-to-exclude-specific-users-in-log-activity-of-postgresql)

There is no connection at the time. So why do I get the error? And what can I 
do about it?

Cheers,
Paul



Re: Exclude logging certain connections?

2020-03-15 Thread Adrian Klaver

On 3/15/20 3:09 AM, Paul Förster wrote:

Hi,

is there a way to exclude certain connections, for example recurring monitoring 
connections, from the postgres.log?

I found this in an old post on stackexchange.com:

postgres=# create role mon login;
CREATE ROLE
postgres=# alter role mon set log_connections=off;
ERROR:  parameter "log_connections" cannot be set after connection start

(source: 
https://dba.stackexchange.com/questions/118018/is-it-possible-to-exclude-specific-users-in-log-activity-of-postgresql)

There is no connection at the time. So why do I get the error? And what can I 
do about it?


Nothing from what I see:

From here:

https://www.postgresql.org/docs/12/config-setting.html#id-1.6.6.4.5

env PGOPTIONS="-c log_connections=off" psql -d test -U aklaver

psql: error: could not connect to server: FATAL:  permission denied to 
set parameter "log_connections"


First problem is you need to be superuser. Alright so be superuser:

date
Sun Mar 15 09:24:20 PDT 2020

aklaver@maura:~> env PGOPTIONS="-c log_connections=off" psql -d test -U 
postgres


psql (12.1)
Type "help" for help.

test=# \x
Expanded display is on.
test=# select * from pg_settings where name = 'log_connections';
-[ RECORD 1 ]---+------------------------------------
name            | log_connections
setting         | off
unit            |
category        | Reporting and Logging / What to Log
short_desc      | Logs each successful connection.
extra_desc      |
context         | superuser-backend
vartype         | bool
source          | client
min_val         |
max_val         |
enumvals        |
boot_val        | off
reset_val       | off
sourcefile      |
sourceline      |
pending_restart | f


The connection is still recorded:

[unknown]-[unknown]-2020-03-15 09:24:23.460 PDT-0LOG:  connection 
received: host=[local]
[unknown]-postgres-2020-03-15 09:24:23.460 PDT-0LOG:  connection 
authorized: user=postgres database=test application_name=psql



To me it looks like log_connections is all or none.




Cheers,
Paul




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Exclude logging certain connections?

2020-03-15 Thread Paul Förster
Hi Adrian,

> Nothing from what I see:
> 
> From here:
> 
> https://www.postgresql.org/docs/12/config-setting.html#id-1.6.6.4.5
> 
> env PGOPTIONS="-c log_connections=off" psql -d test -U aklaver
> 
> psql: error: could not connect to server: FATAL:  permission denied to set 
> parameter "log_connections"
> 
> First problem is you need to be superuser. Alright so be superuser:
> 
> date
> Sun Mar 15 09:24:20 PDT 2020
> 
> aklaver@maura:~> env PGOPTIONS="-c log_connections=off" psql -d test -U 
> postgres
> 
> psql (12.1)
> Type "help" for help.
> 
> test=# \x
> Expanded display is on.
> test=# select * from pg_settings where name = 'log_connections';
> -[ RECORD 1 ]---+------------------------------------
> name            | log_connections
> setting         | off
> unit            |
> category        | Reporting and Logging / What to Log
> short_desc      | Logs each successful connection.
> extra_desc      |
> context         | superuser-backend
> vartype         | bool
> source          | client
> min_val         |
> max_val         |
> enumvals        |
> boot_val        | off
> reset_val       | off
> sourcefile      |
> sourceline      |
> pending_restart | f
> 
> 
> The connection is still recorded:
> 
> [unknown]-[unknown]-2020-03-15 09:24:23.460 PDT-0LOG:  connection received: 
> host=[local]
> [unknown]-postgres-2020-03-15 09:24:23.460 PDT-0LOG:  connection authorized: 
> user=postgres database=test application_name=psql
> 
> 
> To me it looks like log_connections is all or none.

that's what I suspected, and it matches what I have found so far. It's ugly 
because unwanted monitoring connections, like those of pgwatch2, keep spamming 
the logfiles this way. It would be great if there were an option to specify a 
list of users whose connections are not logged, even with connection/disconnection 
logging enabled.

Something like this:

log_exclude_user_connect = 'pgwatch2,myself,...'

in postgresql.conf
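Until such an option exists, one stopgap is to filter the log after the fact. A rough sketch (the role name and file names are only examples):

```shell
# Strip "connection authorized" / "disconnection" lines for the
# monitoring role from a log file. One limitation: "connection
# received" lines carry no user name, so they cannot be matched here.
grep -vE '(connection authorized|disconnection).*user=pgwatch2' \
    postgresql.log > postgresql.filtered.log
```

The same pattern could feed a syslog filter instead, if logging goes through syslog rather than a file.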

Thanks very much.

Cheers,
Paul



unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Karsten Hilbert
Dear community,

we are seeing the below error on PG 9.6.16 on Debian:

(different issue from the psycopg2 one recently posted by me)

> /home/marc# pg_dump -p 5432 --username=gm-dbo --dbname=gnumed_v22 
> --compress=0 --no-sync --format=custom --file=/dev/null
> pg_dump: dumping the contents of table "doc_obj" failed: 
> PQgetResult() failed.
> pg_dump: error message from server: ERROR:  unexpected chunk number 2 
> (expected 0) for toast value 99027 in pg_toast_18536
> pg_dump: The command was: COPY blobs.doc_obj (pk, fk_doc, seq_idx, comment, 
> fk_intended_reviewer, data, filename) TO stdout;

(to note: column "data" is of type BYTEA)

We have been able to identify the row (there may be more)
in blobs.doc_obj which leads to the above error.

blobs.doc_obj.pk -> 82224

We have ruled out (?) below-PG hardware problems by a
successful run of:

cp -rv --preserve=all /var/lib/postgresql/9.6  /tmp/

We then tried

gnumed_v22=# REINDEX TABLE pg_toast.pg_toast_18536;
REINDEX
gnumed_v22=# REINDEX TABLE blobs.doc_obj ;
REINDEX
gnumed_v22=# VACUUM ANALYZE pg_toast.pg_toast_18536;
VACUUM
gnumed_v22=# VACUUM FULL pg_toast.pg_toast_18536;
VACUUM
gnumed_v22=# VACUUM ANALYZE blobs.doc_obj ;
VACUUM
gnumed_v22=# VACUUM FULL blobs.doc_obj ;
ERROR:  unexpected chunk number 2 (expected 0) for toast value 99027 in 
pg_toast_18536

We then tried to DELETE the offending row

delete from blobs.doc_obj where pk = 82224;

but that, again, shows the "unexpected chunk" problem.

Now, what else can we try to address the problem short of
doing the

pg_dump --exclude-table-data=blobs.doc_obj

judicious use of COPY-FROM-with-subselect from blobs.doc_obj

restore

dance ?
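For completeness, the COPY-with-subselect step of that dance could look roughly like this (an untested sketch; pk 82224 is the row identified above, and the salvage table name is made up):

```sql
-- Sketch: salvage everything except the damaged row into a fresh table.
-- Detoasting happens row by row, so skipping pk 82224 should avoid
-- touching the broken chunks.
CREATE TABLE blobs.doc_obj_salvaged (LIKE blobs.doc_obj INCLUDING ALL);

INSERT INTO blobs.doc_obj_salvaged
SELECT * FROM blobs.doc_obj
WHERE pk <> 82224;
```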

Many thanks,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Karsten Hilbert
On Sun, Mar 15, 2020 at 07:23:49PM +0100, Karsten Hilbert wrote:

> We then tried to DELETE the offending row
>
>   delete from blobs.doc_obj where pk = 82224;
>
> but that, again, shows the "unexpected chunk" problem.

According to

http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html

an UPDATE of the row is recommended -- should that work
better than a DELETE ?

I can't find documentation pointing to a fundamental
implementation difference that suggests so.

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Adrian Klaver

On 3/15/20 12:20 PM, Karsten Hilbert wrote:

On Sun, Mar 15, 2020 at 07:23:49PM +0100, Karsten Hilbert wrote:


We then tried to DELETE the offending row

delete from blobs.doc_obj where pk = 82224;

but that, again, shows the "unexpected chunk" problem.


According to

http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html

an UPDATE of the row is recommended -- should that work
better than a DELETE ?

I can't find documentation pointing to a fundamental
implementation difference that suggests so.


https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK

"During an UPDATE operation, values of unchanged fields are normally 
preserved as-is; so an UPDATE of a row with out-of-line values incurs no 
TOAST costs if none of the out-of-line values change."






Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Karsten Hilbert
On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote:

> > > We then tried to DELETE the offending row
> > >
> > >   delete from blobs.doc_obj where pk = 82224;
> > >
> > > but that, again, shows the "unexpected chunk" problem.
> >
> > According to
> >
> > http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html
> >
> > an UPDATE of the row is recommended -- should that work
> > better than a DELETE ?
> >
> > I can't find documentation pointing to a fundamental
> > implementation difference that suggests so.
>
> https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK
>
> "During an UPDATE operation, values of unchanged fields are normally
> preserved as-is; so an UPDATE of a row with out-of-line values incurs no
> TOAST costs if none of the out-of-line values change."

However, where is the fault in my thinking ?

-> An UPDATE actually *would* change the TOASTed BYTEA field (which is corrupt).

I had hoped that the DELETE would NOT have to touch the TOAST
table at all (and thereby not check the chunks) as "all it
needs to do" is mark the row in the *primary* table as
not-needed-anymore.

I must be misunderstanding something.

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Jan Wieck
Have you tried to reindex the table? Toast internally forces an index scan,
so missing index tuples or an otherwise corrupted toast index would have
the same symptoms as toast chunks actually missing.


Regards, Jan

On Sun, Mar 15, 2020, 16:21 Karsten Hilbert  wrote:

> On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote:
>
> > > > We then tried to DELETE the offending row
> > > >
> > > >   delete from blobs.doc_obj where pk = 82224;
> > > >
> > > > but that, again, shows the "unexpected chunk" problem.
> > >
> > > According to
> > >
> > >
> http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html
> > >
> > > an UPDATE of the row is recommended -- should that work
> > > better than a DELETE ?
> > >
> > > I can't find documentation pointing to a fundamental
> > > implementation difference that suggests so.
> >
> >
> https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK
> >
> > "During an UPDATE operation, values of unchanged fields are normally
> > preserved as-is; so an UPDATE of a row with out-of-line values incurs no
> > TOAST costs if none of the out-of-line values change."
>
> However, where is the fault in my thinking ?
>
> -> An UPDATE actually *would* change the TOASTed BYTEA field (which is
> corrupt).
>
> I had hoped that the DELETE would NOT have to touch the TOAST
> table at all (and thereby not check the chunks) as "all it
> needs to do" is mark the row in the *primary* table as
> not-needed-anymore.
>
> I must be misunderstanding something.
>
> Karsten
> --
> GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
>
>
>


Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Karsten Hilbert
On Sun, Mar 15, 2020 at 05:04:06PM -0400, Jan Wieck wrote:

> Have you tried to reindex the table? Toast internally forces an index scan,
> so missing index tuples or an otherwise corrupted toast index would have
> the same symptoms as toast chunks actually missing.

We sure did, but thanks for reminding.

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Order by and timestamp

2020-03-15 Thread Björn Lundin
Hi!
I have an old database that behaves a bit strangely.
It keeps horse races in UK/IE.
I have a program that continuously* adds records into a market table,
described as below.

*continuously means ’after each race’, which is ca 12:00 --> 23:00.

I then did ’select * from AMARKETS order by STARTTS’

And to my surprise I get a result like this (note the order of column STARTTS)

  marketid   |marketname|   startts   | eventid 
….
….

 1.127253880 | To Be Placed | 2016-09-29 16:10:00 | 27951325 |
 1.127275624 | 1m4f Hcap| 2016-09-30 16:20:00 | 27953169 | 
 1.127275625 | To Be Placed | 2016-09-30 16:20:00 | 27953169 |
 1.127275629 | 1m2f Hcap| 2016-09-30 16:50:00 | 27953169 |
 1.127275634 | 1m2f Hcap| 2016-09-30 17:20:00 | 27953169 |
 1.127275635 | To Be Placed | 2016-09-30 17:20:00 | 27953169 |
 1.127275639 | 1m Nursery   | 2016-09-30 17:50:00 | 27953169 |
 1.127275640 | To Be Placed | 2016-09-30 17:50:00 | 27953169 |
 1.127275645 | To Be Placed | 2016-09-30 18:20:00 | 27953169 |
 1.127275649 | 6f Mdn Stks  | 2016-09-30 18:50:00 | 27953169 |
 1.127275650 | To Be Placed | 2016-09-30 18:50:00 | 27953169 |
 1.127275654 | 5f Hcap  | 2016-09-30 19:20:00 | 27953169 |
 1.127275655 | To Be Placed | 2016-09-30 19:20:00 | 27953169 |
 1.127275659 | 5f Hcap  | 2016-09-30 19:50:00 | 27953169 |
 1.127275660 | To Be Placed | 2016-09-30 19:50:00 | 27953169 |
 1.127275677 | 1m Mdn Stks  | 2016-10-01 12:45:00 | 27953172 |
 1.127275680 | To Be Placed | 2016-10-01 12:45:00 | 27953172 |
 1.127275684 | 6f Hcap  | 2016-10-01 13:15:00 | 27953172 |
 1.127275687 | To Be Placed | 2016-10-01 13:15:00 | 27953172 |
 1.127275691 | 1m Hcap  | 2016-10-01 13:50:00 | 27953172 |
 1.127275694 | To Be Placed | 2016-10-01 13:50:00 | 27953172 |
 1.127275698 | 1m2f Hcap| 2016-10-01 14:25:00 | 27953172 |
 1.127275701 | To Be Placed | 2016-10-01 14:25:00 | 27953172 |
 1.127275705 | 1m Grp1  | 2016-10-01 15:00:00 | 27953172 |
 1.127275708 | To Be Placed | 2016-10-01 15:00:00 | 27953172 |
 1.127275715 | To Be Placed | 2016-10-01 15:35:00 | 27953172 |
 1.127275722 | To Be Placed | 2016-10-01 16:10:00 | 27953172 |
 1.127278857 | 7f Hcap  | 2016-09-30 13:00:00 | 27953255 |
 1.127278858 | To Be Placed | 2016-09-30 13:00:00 | 27953255 |
 1.127278862 | 1m Class Stks| 2016-09-30 13:35:00 | 27953255 |
 1.127278863 | To Be Placed | 2016-09-30 13:35:00 | 27953255 |
 1.127278867 | 6f Hcap  | 2016-09-30 14:10:00 | 27953255 |
…
….

 1.130630452 | 2m INHF  | 2017-03-30 16:00:00 | 28172518 |
 1.130630453 | To Be Placed | 2017-03-30 16:00:00 | 28172518 |
 1.130645203 | 1m2f Mdn Stks| 2017-04-01 12:30:00 | 28173548 |
 1.130645204 | To Be Placed | 2017-04-01 12:30:00 | 28173548 |
 1.130645213 | 6f Hcap  | 2017-04-01 13:40:00 | 28173548 |
 1.130645214 | To Be Placed | 2017-04-01 13:40:00 | 28173548 |
 1.130645218 | 1m3f Hcap| 2017-04-01 14:15:00 | 28173548 |
 1.130645219 | To Be Placed | 2017-04-01 14:15:00 | 28173548 |
 1.130645223 | 7f Mdn Stks  | 2017-04-01 14:50:00 | 28173548 |
 1.130645224 | To Be Placed | 2017-04-01 14:50:00 | 28173548 |
 1.130645228 | 1m3f Hcap| 2017-04-01 15:25:00 | 28173548 |
 1.130645229 | To Be Placed | 2017-04-01 15:25:00 | 28173548 |
 1.130645233 | 2m Hcap  | 2017-04-01 16:00:00 | 28173548 |
 1.130645234 | To Be Placed | 2017-04-01 16:00:00 | 28173548 |
 1.130645400 | 2m3f Nov Hrd | 2017-03-31 13:10:00 | 28173582 |
 1.130645401 | To Be Placed | 2017-03-31 13:10:00 | 28173582 |
 1.130645405 | 2m5f Hcap Chs| 2017-03-31 13:40:00 | 28173582 |
 1.130645415 | 2m1f Hcap Chs| 2017-03-31 14:40:00 | 28173582 |
 1.130645416 | To Be Placed | 2017-03-31 14:40:00 | 28173582 |
 1.130645420 | 2m5f Hcap Hrd| 2017-03-31 15:10:00 | 28173582 |
 1.130645421 | To Be Placed | 2017-03-31 15:10:00 | 28173582 |
 1.130645425 | 2m3f Hcap Chs| 2017-03-31 15:40:00 | 28173582 |
 1.130645426 | To Be Placed | 2017-03-31 15:40:00 | 28173582 |
 1.130645430 | 1m5f Stks NHF| 2017-03-31 16:10:00 | 28173582 |
 1.130645431 | To Be Placed | 2017-03-31 16:10:00 | 28173582 |
 1.130645436 | 1m4f Hcap| 2017-03-31 16:45:00 | 28173583 |
 1.130645437 | To Be Placed | 2017-03-31 16:45:00 | 28173583 |
 1.130645441 | 1m Hcap  | 2017-03-31 17:15:00 | 28173583 |
 1.130645442 | To Be Placed  

Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Adrian Klaver

On 3/15/20 1:21 PM, Karsten Hilbert wrote:

On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote:


We then tried to DELETE the offending row

delete from blobs.doc_obj where pk = 82224;

but that, again, shows the "unexpected chunk" problem.


According to

http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html

an UPDATE of the row is recommended -- should that work
better than a DELETE ?

I can't find documentation pointing to a fundamental
implementation difference that suggests so.


https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK

"During an UPDATE operation, values of unchanged fields are normally
preserved as-is; so an UPDATE of a row with out-of-line values incurs no
TOAST costs if none of the out-of-line values change."


However, where is the fault in my thinking ?

-> An UPDATE actually *would* change the TOASTed BYTEA field (which is corrupt).

I had hoped that the DELETE would NOT have to touch the TOAST
table at all (and thereby not check the chunks) as "all it
needs to do" is mark the row in the *primary* table as
not-needed-anymore.

I must be misunderstanding something.


Except it would also need to delete the toast entries as well.


Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Order by and timestamp

2020-03-15 Thread Steven Lembark
On Sun, 15 Mar 2020 22:33:35 +0100
Björn Lundin  wrote:

> And to my surprise I get a result like this (note the order of
> column STARTTS)

(1) Suggest using "pastebin.com" for this kind of data. It may not
look very pretty -- or readable at all -- on the viewer's end
depending on their settings (see below for example).

(2) I think you are referring to one section where the date goes
from 2016-10-01 to 2016-09-30; suggest describing the 
transition in your text and flag the rows with '*' or 
something similar. 

 | 2016-10-01 15:35:00 |
 | 2016-10-01 16:10:00 |
   * | 2016-09-30 13:00:00 | 
   * | 2016-09-30 13:00:00 | 

(3) "Old database" might mean anyting. Provide the PG version 
it was created in and the one you are using along with the
result of "\d+" in the current database. 

(4) Classic causes of this are a botched index. Depending on the 
size you might just want to either drop and re-add the 
indexes or export and reload the table (e.g., \copy to ...
+ truncate + \copy from ...). The point there would be 
fully rebuilding the table and index structure. 

If that doesn't work perhaps drop and re-add the table with
whatever version of PG you are using and then \copy the data
back in using the current version.

(5) If you've tried any of the above then bloody well describe it
(along with any migration steps taken) in the message so you
don't have to re-read what you've already done :-)

(6) Don't gamble on horses, play the stock market instead: It 
sounds fancier and you can lose much more money much more
quickly...  er... yeah.


What this looks like on my end. Feel free to try and make sense
of it yourself.
 
>   marketid   |marketname|   startts   |
> eventid ….
> ….
> 
>  1.127253880 | To Be Placed | 2016-09-29 16:10:00 |
> 27951325 | 1.127275624 | 1m4f Hcap| 2016-09-30
> 16:20:00 | 27953169 | 1.127275625 | To Be Placed |
> 2016-09-30 16:20:00 | 27953169 | 1.127275629 | 1m2f
> Hcap| 2016-09-30 16:50:00 | 27953169 | 1.127275634 |
> 1m2f Hcap| 2016-09-30 17:20:00 | 27953169 |
> 1.127275635 | To Be Placed | 2016-09-30 17:20:00 |
> 27953169 | 1.127275639 | 1m Nursery   | 2016-09-30
> 17:50:00 | 27953169 | 1.127275640 | To Be Placed |
> 2016-09-30 17:50:00 | 27953169 | 1.127275645 | To Be
> Placed | 2016-09-30 18:20:00 | 27953169 | 1.127275649 |
> 6f Mdn Stks  | 2016-09-30 18:50:00 | 27953169 |
> 1.127275650 | To Be Placed | 2016-09-30 18:50:00 |
> 27953169 | 1.127275654 | 5f Hcap  | 2016-09-30
> 19:20:00 | 27953169 | 1.127275655 | To Be Placed |
> 2016-09-30 19:20:00 | 27953169 | 1.127275659 | 5f
> Hcap  | 2016-09-30 19:50:00 | 27953169 | 1.127275660
> | To Be Placed | 2016-09-30 19:50:00 | 27953169 |
> 1.127275677 | 1m Mdn Stks  | 2016-10-01 12:45:00 |
> 27953172 | 1.127275680 | To Be Placed | 2016-10-01
> 12:45:00 | 27953172 | 1.127275684 | 6f Hcap  |
> 2016-10-01 13:15:00 | 27953172 | 1.127275687 | To Be
> Placed | 2016-10-01 13:15:00 | 27953172 | 1.127275691 |
> 1m Hcap  | 2016-10-01 13:50:00 | 27953172 |
> 1.127275694 | To Be Placed | 2016-10-01 13:50:00 |
> 27953172 | 1.127275698 | 1m2f Hcap| 2016-10-01
> 14:25:00 | 27953172 | 1.127275701 | To Be Placed |
> 2016-10-01 14:25:00 | 27953172 | 1.127275705 | 1m
> Grp1  | 2016-10-01 15:00:00 | 27953172 | 1.127275708
> | To Be Placed | 2016-10-01 15:00:00 | 27953172 |
> 1.127275715 | To Be Placed | 2016-10-01 15:35:00 |
> 27953172 | 1.127275722 | To Be Placed | 2016-10-01
> 16:10:00 | 27953172 | 1.127278857 | 7f Hcap  |
> 2016-09-30 13:00:00 | 27953255 | 1.127278858 | To Be
> Placed | 2016-09-30 13:00:00 | 27953255 | 1.127278862 |
> 1m Class Stks| 2016-09-30 13:35:00 | 27953255 |
> 1.127278863 | To Be Placed | 2016-09-30 13:35:00 |
> 27953255 | 1.127278867 | 6f Hcap  | 2016-09-30
> 14:10:00 | 27953255 | … ….
> 
>  1.130630452 | 2m INHF  | 2017-03-30 16:00:00 |
> 28172518 | 1.130630453 | To Be Placed | 2017-03-30
> 16:00:00 | 28172518 | 1.130645203 | 1m2f Mdn Stks|
> 2017-04-01 12:30:00 | 28173548 | 1.130645204 | To Be
> Placed | 2017-04-01 12:30:00 | 28173548 | 1.130645213 |
> 6f Hcap  | 2017-04-01 13:40:00 | 28173548 |
> 1.130645214 | To Be Placed | 2017-04-01 13:40:00 |
> 28173548 | 1.130645218 | 1m3f Hcap| 2017-04-01
> 14:15:00 | 28173548 | 1.130645219 | To Be Placed |
> 2017-04-01 14:15:00 | 28173548 | 1.130645223 | 7f Mdn
> Stks  | 2017-04-01 14:50:00 | 28173548 | 1.130645224 | To
> B

Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Karsten Hilbert
On Sun, Mar 15, 2020 at 02:35:39PM -0700, Adrian Klaver wrote:

> On 3/15/20 1:21 PM, Karsten Hilbert wrote:
> > On Sun, Mar 15, 2020 at 12:58:53PM -0700, Adrian Klaver wrote:
> >
> > > > > We then tried to DELETE the offending row
> > > > >
> > > > >   delete from blobs.doc_obj where pk = 82224;
> > > > >
> > > > > but that, again, shows the "unexpected chunk" problem.
> > > >
> > > > According to
> > > >
> > > > 
> > > > http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html
> > > >
> > > > an UPDATE of the row is recommended -- should that work
> > > > better than a DELETE ?
> > > >
> > > > I can't find documentation pointing to a fundamental
> > > > implementation difference that suggests so.
> > >
> > > https://www.postgresql.org/docs/12/storage-toast.html#STORAGE-TOAST-ONDISK
> > >
> > > "During an UPDATE operation, values of unchanged fields are normally
> > > preserved as-is; so an UPDATE of a row with out-of-line values incurs no
> > > TOAST costs if none of the out-of-line values change."
> >
> > However, where is the fault in my thinking ?
> >
> > -> An UPDATE actually *would* change the TOASTed BYTEA field (which is 
> > corrupt).
> >
> > I had hoped that the DELETE would NOT have to touch the TOAST
> > table at all (and thereby not check the chunks) as "all it
> > needs to do" is mark the row in the *primary* table as
> > not-needed-anymore.
> >
> > I must be misunderstanding something.
>
> Except it would also need to delete the toast entries as well.

OK, got that. What I now don't understand is how the UPDATE
won't have to touch the TOAST table when the TOASTed value
*is* UPDATEd:

update blobs.doc_obj set data = '' where pk = the_faulty_row;

(data is the BYTEA column)

Slightly confused :-)
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: unexpected chunk number 2 (expected 0) for toast value ... in pg_toast_18536

2020-03-15 Thread Tom Lane
Karsten Hilbert  writes:
>>> According to
>>> http://www.databasesoup.com/2013/10/de-corrupting-toast-tables.html
>>> an UPDATE of the row is recommended -- should that work
>>> better than a DELETE ?

> OK, got that. What I now don't understand is how the UPDATE
> won't have to touch the TOAST table when the TOASTed value
> *is* UPDATEd:
>   update blobs.doc_obj set data = '' where pk = the_faulty_row;
> (data is the BYTEA column)

It makes no sense to me either; I wonder if Josh's recipe ever
really worked?  But it's clearly not working now, and that's
what I'd expect, because any mechanism for removing the busted
toast reference is going to cause the system to try to mark
the toast rows deleted.

Since you reindexed the toast table and it still doesn't find
the missing chunks, I think the easiest "fix" would be to manually
insert rows with the correct chunk_id and chunk_seq, and ideally
with chunk_data of the appropriate length.  Then deletion of the
reference should work.

Unfortunately, it seems like you can't do that either, short of
hacking up the backend or writing some custom C code, because the
executor won't let you open a toast table as result relation :-(.
I wonder if we should change it to allow that when
allow_system_table_mods is true?  This isn't the first time we've
seen people need to be able to do surgery on a toast table.

regards, tom lane
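As an illustration of the chunk bookkeeping Tom describes, the surviving chunks of the damaged value can be listed directly (a sketch; 99027 and pg_toast_18536 come from the error message, and reading the toast table requires superuser):

```sql
-- Which chunk_seq values actually exist for the damaged toast value?
-- Gaps in chunk_seq show which rows would need to be inserted by hand.
SELECT chunk_id, chunk_seq, length(chunk_data) AS bytes
FROM pg_toast.pg_toast_18536
WHERE chunk_id = 99027
ORDER BY chunk_seq;
```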




Re: Order by and timestamp

2020-03-15 Thread Adrian Klaver

On 3/15/20 2:48 PM, Steven Lembark wrote:

On Sun, 15 Mar 2020 22:33:35 +0100
Björn Lundin  wrote:


And to my surprise I get a result like this (note the order of
column STARTTS)


(1) Suggest using "pastebin.com" for this kind of data. It may not
 look very pretty -- or readable at all -- on the viewer's end
 depending on their settings (see below for example).

(2) I think you are referring to one section where the date goes
 from 2016-10-01 to 2016-09-30; suggest describing the
 transition in your text and flag the rows with '*' or
 something similar.

  | 2016-10-01 15:35:00 |
  | 2016-10-01 16:10:00 |
* | 2016-09-30 13:00:00 |
* | 2016-09-30 13:00:00 |

(3) "Old database" might mean anyting. Provide the PG version
 it was created in and the one you are using along with the
 result of "\d+" in the current database.


That was at the bottom of the post. Version 9.6.10 and a \d for amarkets.



(4) Classic causes of this are a botched index. Depending on the
 size you might just want to either drop and re-add the
 indexes or export and reload the table (e.g., \copy to ...
 + truncate + \copy from ...). The point there would be
 fully rebuilding the table and index structure.

 If that doesn't work perhaps drop and re-add the table with
 whatever version of PG you are using and then \copy the data
 back in using the current version.

(5) If you've tried any of the above then bloody well describe it
 (along with any migration steps taken) in the message so you
 don't have to re-read what you've already done :-)

(6) Don't gamble on horses, play the stock market instead: It
 sounds fancier and you can lose much more money much more
 quickly...  er... yeah.


What this looks like on my end. Feel free to try and make sense
of it yourself.
  



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Order by and timestamp

2020-03-15 Thread Adrian Klaver

On 3/15/20 2:33 PM, Björn Lundin wrote:

Hi!
I have an old database that behaves a bit strangely.
It keeps horse races in UK/IE.
I have a program that continuously* adds records into a market table,
described as below.

*continuously means ’after each race’, which is ca 12:00 --> 23:00.

I then did ’select * from AMARKETS order by STARTTS’


Is amarkets in more than one schema?

If so what is search_path?

I could not replicate the below.

What does below show?:

select '2016-09-30 13:00:00'::timestamp at time zone 'UTC';

select '2016-10-01 15:35:00'::timestamp at time zone 'UTC';





And to my surprise I get a result like this (note the order of column
STARTTS)


   marketid   |        marketname        |       startts       | eventid
….
….




  1.127275701 | To Be Placed             | 2016-10-01 14:25:00 | 27953172 |
  1.127275705 | 1m Grp1                  | 2016-10-01 15:00:00 | 27953172 |
  1.127275708 | To Be Placed             | 2016-10-01 15:00:00 | 27953172 |
  1.127275715 | To Be Placed             | 2016-10-01 15:35:00 | 27953172 |
  1.127275722 | To Be Placed             | 2016-10-01 16:10:00 | 27953172 |
  1.127278857 | 7f Hcap                  | 2016-09-30 13:00:00 | 27953255 |
  1.127278858 | To Be Placed             | 2016-09-30 13:00:00 | 27953255 |
  1.127278862 | 1m Class Stks            | 2016-09-30 13:35:00 | 27953255 |
  1.127278863 | To Be Placed             | 2016-09-30 13:35:00 | 27953255 |
  1.127278867 | 6f Hcap                  | 2016-09-30 14:10:00 | 27953255 |
…
….





regards
--
Björn Lundin
b.f.lun...@gmail.com 






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Order by and timestamp

2020-03-15 Thread Tom Lane
Adrian Klaver  writes:
> On 3/15/20 2:33 PM, Björn Lundin wrote:
>> I then did ’select * from AMARKETS order by STARTTS’

> Is amarkets in more than one schema?

Yeah, it's hard to think of any explanation other than "the query used a
corrupt index on startts to produce the ordering".  But your \d doesn't
show any index on startts.  So maybe there's more than one amarkets
table?

Another possibly-useful bit of evidence is to see what EXPLAIN shows as
the query plan for this query.

regards, tom lane
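Both hypotheses can be checked from psql; a sketch (assuming the table name is amarkets, as in the original post):

```sql
-- Is there more than one table named amarkets, in different schemas?
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'amarkets';

-- And which plan (and which index, if any) produced the ordering?
EXPLAIN SELECT * FROM amarkets ORDER BY startts;
```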




Invalid byte sequence errors on DB restore

2020-03-15 Thread Samuel Smith
My current DB backup routine is just to call pg_dump and pipe to gzip. 
We recently started to get a failure to restore (which is basically just 
using psql -f on the pg_dump file) with the following errors:


invalid byte sequence for encoding "UTF8": 0xa0
 and
invalid byte sequence for encoding "UTF8": 0xd7 0x20


This is on a pg 9.2.24 instance. Any tips to troubleshoot?
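One way to narrow this down, as a sketch (assuming the backup is a plain SQL dump piped through gzip, as described): run the uncompressed dump through iconv, which stops at the first invalid sequence and reports its byte offset.

```shell
# Check the uncompressed dump for bytes that are not valid UTF-8.
# glibc iconv exits non-zero on the first illegal input sequence and
# prints its position, which can then be inspected with dd/hexdump.
gunzip -c backup.sql.gz | iconv -f UTF-8 -t UTF-8 > /dev/null
```

0xa0 and 0xd7 look like Latin-1 bytes (no-break space, multiplication sign) that ended up in a UTF-8 database, so it may also be worth checking the client_encoding in effect when the data was originally inserted.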

Regards,
Samuel Smith