Re: Should I reinstall over current installation?

2020-02-04 Thread Moreno Andreo

  On 04/02/2020 00:16, Chris Charley wrote:

I tried the items you suggested (1-5), but could find no helpful info.


Thanks for your help and going the extra mile!
Hope I'm in time to save you from a reinstall :-)

How to check if PostgreSQL is running
---
- Run Services.msc
- In the console you should see an entry for your Postgres version
(I have 9.1 and 9.5 on this host)
- Check that it's set to start automatically and is running (my console is
in Italian: "In esecuzione" means "running", and "Automatico", well :-))
- If it's not, try to start it (right-click the service and select Start)
- If you get an error, check the error message

How to check the error message
---
You have 2 choices:

1 - Postgres Logs)
First of all, check the PostgreSQL logs under \data\pg_log. There you should
find files showing most of the errors you would find in Event Viewer.
Check the error messages and, if still needed, post them.

2 - Windows Event Log)

- Open Event Viewer
- Sort events by date, descending
- The first entries should reveal why your Postgres server is not
running
(I captured the first error I got on this host just as an example)
- In the lower pane, check what the error message is (in my case,
"FATAL: The database system is starting up").


If you need further help, post the error message and we'll try to
help you.


Hope this helps,
Moreno.-



Re: Postgres Crashing

2020-02-04 Thread Doug Roberts
> So how did containers_reset_recirc() come to clash with
> containers_add_update()?

They are clashing because another portion of our system is running and
updating containers. The reset recirc function was run at the same time to
see how our system and the database would handle it.

The recirc string is formatted like 2000=3,1000=6,5000=0. So the reset
recirc function will take a UID (1000, for example) and use it to remove
1000=x from the recirc counts of all the containers that have 1000=x.
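
To illustrate, that removal could be done with a single regexp; this is just
a sketch, and the recirc column name here is an assumption, not taken from
our actual schema:

```sql
-- Hypothetical sketch: drop the '1000=x' entry from a recirc string
-- such as '2000=3,1000=6,5000=0', leaving '2000=3,5000=0'.
-- 'recirc' is an assumed column name on the containers table.
UPDATE containers
   SET recirc = btrim(regexp_replace(recirc, '(^|,)1000=[0-9]+', ''), ',')
 WHERE recirc ~ '(^|,)1000=';
```

The btrim takes care of the leading comma left behind when the removed entry
was the first one in the string.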

We are currently using PG 12.0.

Thanks,

Doug

On Mon, Feb 3, 2020 at 6:21 PM Tom Lane  wrote:

> Adrian Klaver  writes:
> > Please reply to list also.
>
> > On 2/3/20 2:18 PM, Doug Roberts wrote:
> >> Here is what the reset recirc function is doing.
> >> ...
> >> UPDATE containers
> >> ...
>
> > So how did containers_reset_recirc() come to clash with
> > containers_add_update()?
>
> If this is PG 12.0 or 12.1, a likely theory is that this is an
> EvalPlanQual bug (which'd be triggered during concurrent updates
> of the same row in the table, so that squares with the observation
> that locking the table prevents it).  The known bugs in that area
> require either before-row-update triggers on the table, or
> child tables (either partitioning or traditional inheritance).
> So I wonder what the schema of table "containers" looks like.
>
> Or you could have hit some new bug ... but there's not enough
> info here to diagnose.
>
> regards, tom lane
>


Re: Postgres Crashing

2020-02-04 Thread Doug Roberts
Hello,

Here is a stacktrace of what happened before and after the crash.

Thanks,

Doug

2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: server process (PID
12168) was terminated by exception 0xC005
2020-02-04 10:26:16.841 EST [20788] [0] DETAIL:  Failed process was
running: select CONTAINERS_RESET_RECIRC_BY_DP(3000)
2020-02-04 10:26:16.841 EST [20788] [0] HINT:  See C include file
"ntstatus.h" for a description of the hexadecimal value.
2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  LogChildExit,
postmaster.c:3670
2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: terminating any other
active server processes
2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  HandleChildCrash,
postmaster.c:3400
2020-02-04 10:26:16.873 EST [1212] [0] WARNING:  57P02: terminating
connection because of crash of another server process
2020-02-04 10:26:16.873 EST [1212] [0] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2020-02-04 10:26:16.873 EST [1212] [0] HINT:  In a moment you should be
able to reconnect to the database and repeat your command.
2020-02-04 10:26:16.873 EST [1212] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.873 EST [19436] [0] WARNING:  57P02: terminating
connection because of crash of another server process
2020-02-04 10:26:16.873 EST [19436] [0] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2020-02-04 10:26:16.873 EST [19436] [0] HINT:  In a moment you should be
able to reconnect to the database and repeat your command.
2020-02-04 10:26:16.873 EST [19436] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.874 EST [13428] [0] WARNING:  57P02: terminating
connection because of crash of another server process
2020-02-04 10:26:16.874 EST [13428] [0] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2020-02-04 10:26:16.874 EST [13428] [0] HINT:  In a moment you should be
able to reconnect to the database and repeat your command.
2020-02-04 10:26:16.874 EST [13428] [0] CONTEXT:  while locking tuple
(0,115) in relation "containers"
SQL statement "UPDATE containers
   SET type_uid = COALESCE(declared_type_uid, type_uid),
   carton_type_uid = COALESCE(declared_carton_type_uid,
carton_type_uid),
   status_uid = COALESCE(declared_status_uid, status_uid),
   order_uid = COALESCE(in_order_uid, order_uid),
   wave_uid = COALESCE(in_wave_uid, wave_uid),
   length = COALESCE(in_length, carton_length, length),
   width = COALESCE(in_width, carton_width, width),
   height = COALESCE(in_height, carton_height, height),
   weight = COALESCE(in_weight, weight),
   weight_minimum = COALESCE(in_weight_minimum, weight_minimum),
   weight_maximum = COALESCE(in_weight_maximum, weight_maximum),
   weight_expected = COALESCE(in_weight_expected,
weight_expected),
   first_seen_DP_id = COALESCE(first_seen_DP_id,
in_last_seen_DP_id),
   first_seen_datetime = COALESCE(first_seen_datetime,
last_seen_date_time),
   last_seen_DP_id = COALESCE(in_last_seen_DP_id,
last_seen_DP_id),
   last_seen_datetime = COALESCE(last_seen_date_time,
last_seen_datetime),
   recirculation_count = COALESCE(in_recirculation_count,
recirculation_count),
   project_flags = COALESCE(in_project_flags, project_flags),
   passed_weight_check = COALESCE(in_passed_weight_check,
passed_weight_check)
   WHERE uid = in_uid"
PL/pgSQL function
containers_add_update(integer,integer,integer,integer,integer,integer,double
precision,double precision,double precision,double precision,double
precision,double precision,double precision,integer,timestamp without time
zone,character varying,bigint,boolean) line 60 at SQL statement
2020-02-04 10:26:16.874 EST [13428] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.874 EST [25916] [0] WARNING:  57P02: terminating
connection because of crash of another server process
2020-02-04 10:26:16.874 EST [25916] [0] DETAIL:  The postmaster has
commanded this server process to roll back the current transaction and
exit, because another server process exited abnormally and possibly
corrupted shared memory.
2020-02-04 10:26:16.874 EST [25916] [0] HINT:  In a moment you should be
able to reconnect to the database and repeat your command.
2020-02-04 10:26:16.874 EST [25916] [0] CONTEXT:  while locking tuple
(1,91) in relation "containers"
SQL statement "UPDATE containers
   SET type_uid = COALESCE(declared_type_uid, type_uid),
 

Re: Postgres Crashing

2020-02-04 Thread Adrian Klaver

On 2/4/20 8:06 AM, Doug Roberts wrote:

Hello,

Here is a stacktrace of what happened before and after the crash.


Actually the below is the Postgres log. Per Tom's previous post the 
procedure to get a stack trace can be found here:


https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend



Thanks,

Doug

2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: server process (PID 
12168) was terminated by exception 0xC005
2020-02-04 10:26:16.841 EST [20788] [0] DETAIL:  Failed process was 
running: select CONTAINERS_RESET_RECIRC_BY_DP(3000)
2020-02-04 10:26:16.841 EST [20788] [0] HINT:  See C include file 
"ntstatus.h" for a description of the hexadecimal value.
2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  LogChildExit, 
postmaster.c:3670
2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: terminating any 
other active server processes
2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  HandleChildCrash, 
postmaster.c:3400
2020-02-04 10:26:16.873 EST [1212] [0] WARNING:  57P02: terminating 
connection because of crash of another server process
2020-02-04 10:26:16.873 EST [1212] [0] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-04 10:26:16.873 EST [1212] [0] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.

2020-02-04 10:26:16.873 EST [1212] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.873 EST [19436] [0] WARNING:  57P02: terminating 
connection because of crash of another server process
2020-02-04 10:26:16.873 EST [19436] [0] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-04 10:26:16.873 EST [19436] [0] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.

2020-02-04 10:26:16.873 EST [19436] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.874 EST [13428] [0] WARNING:  57P02: terminating 
connection because of crash of another server process
2020-02-04 10:26:16.874 EST [13428] [0] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-04 10:26:16.874 EST [13428] [0] HINT:  In a moment you should be 
able to reconnect to the database and repeat your command.
2020-02-04 10:26:16.874 EST [13428] [0] CONTEXT:  while locking tuple 
(0,115) in relation "containers"

SQL statement "UPDATE containers
            SET type_uid = COALESCE(declared_type_uid, type_uid),
                carton_type_uid = COALESCE(declared_carton_type_uid, 
carton_type_uid),

                status_uid = COALESCE(declared_status_uid, status_uid),
                order_uid = COALESCE(in_order_uid, order_uid),
                wave_uid = COALESCE(in_wave_uid, wave_uid),
                length = COALESCE(in_length, carton_length, length),
                width = COALESCE(in_width, carton_width, width),
                height = COALESCE(in_height, carton_height, height),
                weight = COALESCE(in_weight, weight),
                weight_minimum = COALESCE(in_weight_minimum, 
weight_minimum),
                weight_maximum = COALESCE(in_weight_maximum, 
weight_maximum),
                weight_expected = COALESCE(in_weight_expected, 
weight_expected),
                first_seen_DP_id = COALESCE(first_seen_DP_id, 
in_last_seen_DP_id),
                first_seen_datetime = COALESCE(first_seen_datetime, 
last_seen_date_time),
                last_seen_DP_id = COALESCE(in_last_seen_DP_id, 
last_seen_DP_id),
                last_seen_datetime = COALESCE(last_seen_date_time, 
last_seen_datetime),
                recirculation_count = COALESCE(in_recirculation_count, 
recirculation_count),

                project_flags = COALESCE(in_project_flags, project_flags),
                passed_weight_check = COALESCE(in_passed_weight_check, 
passed_weight_check)

            WHERE uid = in_uid"
PL/pgSQL function 
containers_add_update(integer,integer,integer,integer,integer,integer,double 
precision,double precision,double precision,double precision,double 
precision,double precision,double precision,integer,timestamp without 
time zone,character varying,bigint,boolean) line 60 at SQL statement

2020-02-04 10:26:16.874 EST [13428] [0] LOCATION:  quickdie, postgres.c:2717
2020-02-04 10:26:16.874 EST [25916] [0] WARNING:  57P02: terminating 
connection because of crash of another server process
2020-02-04 10:26:16.874 EST [25916] [0] DETAIL:  The postmaster has 
commanded this server process to roll back the current transaction and 
exit, because another server process exited abnormally and possibly 
corrupted shared memory.
2020-02-04 10:26:16.874 ES

Re: Postgres Crashing

2020-02-04 Thread Doug Roberts
Sure. Ok then.

On Tue, Feb 4, 2020 at 11:18 AM Adrian Klaver 
wrote:

> On 2/4/20 8:06 AM, Doug Roberts wrote:
> > Hello,
> >
> > Here is a stacktrace of what happened before and after the crash.
>
> Actually the below is the Postgres log. Per Tom's previous post the
> procedure to get a stack trace can be found here:
>
>
> https://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend
>
> >
> > Thanks,
> >
> > Doug
> >
> > 2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: server process (PID
> > 12168) was terminated by exception 0xC005
> > 2020-02-04 10:26:16.841 EST [20788] [0] DETAIL:  Failed process was
> > running: select CONTAINERS_RESET_RECIRC_BY_DP(3000)
> > 2020-02-04 10:26:16.841 EST [20788] [0] HINT:  See C include file
> > "ntstatus.h" for a description of the hexadecimal value.
> > 2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  LogChildExit,
> > postmaster.c:3670
> > 2020-02-04 10:26:16.841 EST [20788] [0] LOG:  0: terminating any
> > other active server processes
> > 2020-02-04 10:26:16.841 EST [20788] [0] LOCATION:  HandleChildCrash,
> > postmaster.c:3400
> > 2020-02-04 10:26:16.873 EST [1212] [0] WARNING:  57P02: terminating
> > connection because of crash of another server process
> > 2020-02-04 10:26:16.873 EST [1212] [0] DETAIL:  The postmaster has
> > commanded this server process to roll back the current transaction and
> > exit, because another server process exited abnormally and possibly
> > corrupted shared memory.
> > 2020-02-04 10:26:16.873 EST [1212] [0] HINT:  In a moment you should be
> > able to reconnect to the database and repeat your command.
> > 2020-02-04 10:26:16.873 EST [1212] [0] LOCATION:  quickdie,
> postgres.c:2717
> > 2020-02-04 10:26:16.873 EST [19436] [0] WARNING:  57P02: terminating
> > connection because of crash of another server process
> > 2020-02-04 10:26:16.873 EST [19436] [0] DETAIL:  The postmaster has
> > commanded this server process to roll back the current transaction and
> > exit, because another server process exited abnormally and possibly
> > corrupted shared memory.
> > 2020-02-04 10:26:16.873 EST [19436] [0] HINT:  In a moment you should be
> > able to reconnect to the database and repeat your command.
> > 2020-02-04 10:26:16.873 EST [19436] [0] LOCATION:  quickdie,
> postgres.c:2717
> > 2020-02-04 10:26:16.874 EST [13428] [0] WARNING:  57P02: terminating
> > connection because of crash of another server process
> > 2020-02-04 10:26:16.874 EST [13428] [0] DETAIL:  The postmaster has
> > commanded this server process to roll back the current transaction and
> > exit, because another server process exited abnormally and possibly
> > corrupted shared memory.
> > 2020-02-04 10:26:16.874 EST [13428] [0] HINT:  In a moment you should be
> > able to reconnect to the database and repeat your command.
> > 2020-02-04 10:26:16.874 EST [13428] [0] CONTEXT:  while locking tuple
> > (0,115) in relation "containers"
> > SQL statement "UPDATE containers
> > SET type_uid = COALESCE(declared_type_uid, type_uid),
> > carton_type_uid = COALESCE(declared_carton_type_uid,
> > carton_type_uid),
> > status_uid = COALESCE(declared_status_uid, status_uid),
> > order_uid = COALESCE(in_order_uid, order_uid),
> > wave_uid = COALESCE(in_wave_uid, wave_uid),
> > length = COALESCE(in_length, carton_length, length),
> > width = COALESCE(in_width, carton_width, width),
> > height = COALESCE(in_height, carton_height, height),
> > weight = COALESCE(in_weight, weight),
> > weight_minimum = COALESCE(in_weight_minimum,
> > weight_minimum),
> > weight_maximum = COALESCE(in_weight_maximum,
> > weight_maximum),
> > weight_expected = COALESCE(in_weight_expected,
> > weight_expected),
> > first_seen_DP_id = COALESCE(first_seen_DP_id,
> > in_last_seen_DP_id),
> > first_seen_datetime = COALESCE(first_seen_datetime,
> > last_seen_date_time),
> > last_seen_DP_id = COALESCE(in_last_seen_DP_id,
> > last_seen_DP_id),
> > last_seen_datetime = COALESCE(last_seen_date_time,
> > last_seen_datetime),
> > recirculation_count = COALESCE(in_recirculation_count,
> > recirculation_count),
> > project_flags = COALESCE(in_project_flags,
> project_flags),
> > passed_weight_check = COALESCE(in_passed_weight_check,
> > passed_weight_check)
> > WHERE uid = in_uid"
> > PL/pgSQL function
> >
> containers_add_update(integer,integer,integer,integer,integer,integer,double
>
> > precision,double precision,double precision,double precision,double
> > precision,double precision,double precision,integer,timestamp without
> > time zone,character varying,bigint,boolean) line 60 at SQL statement
> > 2020-02-04 10:26:16.874 EST [13428] [0] LOCATION:  quickdie,
> postgres.c:2717
> 

Re: Postgres Crashing

2020-02-04 Thread Adrian Klaver

On 2/4/20 6:20 AM, Doug Roberts wrote:

So how did containers_reset_recirc() come to clash with
containers_add_update()?


They are clashing because another portion of our system is running and 
updating containers. The reset recirc function was run at the same time 
to see how our system and the database would handle it.


So does your system have the things Tom mentioned below?:

"The known bugs in that area
require either before-row-update triggers on the table, or
child tables (either partitioning or traditional inheritance).
So I wonder what the schema of table "containers" looks like."



The recirc string is formatted like 2000=3,1000=6,5000=0. So the reset 
recirc function will take a UID (1000 for example) and use that to 
remove 1000=x from all of the recirc counts for all of the containers 
that have 1000=x.


We are currently using PG 12.0.

Thanks,

Doug

On Mon, Feb 3, 2020 at 6:21 PM Tom Lane wrote:


Adrian Klaver <adrian.kla...@aklaver.com> writes:
 > Please reply to list also.

 > On 2/3/20 2:18 PM, Doug Roberts wrote:
 >> Here is what the reset recirc function is doing.
 >> ...
 >>     UPDATE containers
 >> ...

 > So how did containers_reset_recirc() come to clash with
 > containers_add_update()?

If this is PG 12.0 or 12.1, a likely theory is that this is an
EvalPlanQual bug (which'd be triggered during concurrent updates
of the same row in the table, so that squares with the observation
that locking the table prevents it).  The known bugs in that area
require either before-row-update triggers on the table, or
child tables (either partitioning or traditional inheritance).
So I wonder what the schema of table "containers" looks like.

Or you could have hit some new bug ... but there's not enough
info here to diagnose.

                         regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




RE: Add column with default value in big table - splitting of updates can help?

2020-02-04 Thread Kevin Brannen
>From: Durumdara 
>
>a.)
>PG version is mainly 9.6, but some of the servers are 10.x or 11.x.
>b.)
>We have a semi-automatic tool which collects the new modifications for the 
>databases and executes them all at once, per database.
>So: one SQL script per database, in one transaction - all or nothing. 
>If something fails, we know where to start again by hand. It must not 
>execute only the first half, leaving us unsure which statements ran.

Unless you have some special requirement, you don't have to do it all or
nothing. As Despez points out, you can do it in multiple transactions just
fine. We do it that way here all the time. :)

What it means is that you have to write guards or checks into your upgrade
script. In some instances it's very easy, because some statements have IF NOT
EXISTS to help you. For those places where running the same statement twice
would cause an error, put a check around it. A plpgsql DO block allows for
conditional logic: you do the work only if the check shows it hasn't been done
yet (the information_schema and pg_catalog tables are your friends for this).

The point is that you can run your upgrade script as many times as needed,
should something happen and it stops. Rerunning the upgrade script should
never cause an error if you've coded it correctly.
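
For example, a guard for adding a column might look like this (schema, table,
and column names are placeholders):

```sql
-- Idempotent guard: add the column only if it isn't already there,
-- so rerunning the upgrade script is harmless.
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
          FROM information_schema.columns
         WHERE table_schema = 'public'
           AND table_name   = 'my_table'
           AND column_name  = 'new_col'
    ) THEN
        ALTER TABLE public.my_table ADD COLUMN new_col integer DEFAULT 0;
    END IF;
END
$$;
```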

>
>The main problem is that sometimes we have to modify tables which have too 
>many records in some customer databases.

I'm going to go with everyone else here because it works. We tend to do updates
in blocks of 10K records at a time. Do some tests and figure out what works best
for your setup (maybe you have enough memory to do 100K chunks). Whatever you
do, make sure that the column you use to divide the work has an index on it! Use
the primary key if it's an INT. If you have no index, find a column you can
create an index on for this work, then drop it at the end; that will be far
faster than having to do a bunch of table scans.
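
As a rough sketch of one batch (assuming an indexed integer primary key named
id; table and column names are placeholders, and on 9.6 you'd drive the loop
from a client script, advancing the range and committing between batches):

```sql
-- One 10K-row batch of the backfill; repeat with the next id range
-- until no rows are left to update.
UPDATE my_table
   SET new_col = 0
 WHERE id > 0 AND id <= 10000     -- advance this window each pass
   AND new_col IS NULL;           -- skip rows already done
```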

Yes, it's probably a very good idea to upgrade to a newer version if you can
as performance improvements come with each new version.

HTH,
Kevin


Re: Postgres Crashing

2020-02-04 Thread Doug Roberts
Hello,

Hopefully the following stack trace is more helpful.

Exception thrown at 0x000140446403 in postgres.exe: 0xC005: Access
violation reading location 0xFFF8. occurred

> postgres.exe!pfree(void * pointer) Line 1033 C
  postgres.exe!tts_buffer_heap_clear(TupleTableSlot * slot) Line 653 C
  [Inline Frame] postgres.exe!ExecClearTuple(TupleTableSlot *) Line 428 C
  postgres.exe!ExecForceStoreHeapTuple(HeapTupleData * tuple,
TupleTableSlot * slot, bool shouldFree) Line 1448 C
  postgres.exe!ExecBRUpdateTriggers(EState * estate, EPQState * epqstate,
ResultRelInfo * relinfo, ItemPointerData * tupleid, HeapTupleData *
fdw_trigtuple, TupleTableSlot * newslot) Line 3117 C
  postgres.exe!ExecUpdate(ModifyTableState * mtstate, ItemPointerData *
tupleid, HeapTupleData * oldtuple, TupleTableSlot * slot, TupleTableSlot *
planSlot, EPQState * epqstate, EState * estate, bool canSetTag) Line 1072 C
  postgres.exe!ExecModifyTable(PlanState * pstate) Line 2223 C
  [Inline Frame] postgres.exe!ExecProcNode(PlanState *) Line 239 C
  postgres.exe!ExecutePlan(EState * estate, PlanState * planstate, bool
use_parallel_mode, CmdType operation, bool sendTuples, unsigned __int64
numberTuples, ScanDirection direction, _DestReceiver * dest, bool
execute_once) Line 1652 C
  postgres.exe!standard_ExecutorRun(QueryDesc * queryDesc, ScanDirection
direction, unsigned __int64 count, bool execute_once) Line 378 C
  postgres.exe!_SPI_pquery(QueryDesc * queryDesc, bool fire_triggers,
unsigned __int64 tcount) Line 2523 C
  postgres.exe!_SPI_execute_plan(_SPI_plan * plan, ParamListInfoData *
paramLI, SnapshotData * snapshot, SnapshotData * crosscheck_snapshot, bool
read_only, bool fire_triggers, unsigned __int64 tcount) Line 2298 C
  postgres.exe!SPI_execute_plan_with_paramlist(_SPI_plan * plan,
ParamListInfoData * params, bool read_only, long tcount) Line 581 C
  plpgsql.dll!exec_stmt_execsql(PLpgSQL_execstate * estate,
PLpgSQL_stmt_execsql * stmt) Line 4162 C
  plpgsql.dll!exec_stmt(PLpgSQL_execstate * estate, PLpgSQL_stmt * stmt)
Line 2033 C
  [Inline Frame] plpgsql.dll!exec_stmts(PLpgSQL_execstate * stmts, List *)
Line 1924 C
  plpgsql.dll!exec_stmt_block(PLpgSQL_execstate * estate,
PLpgSQL_stmt_block * block) Line 1865 C
  plpgsql.dll!exec_stmt(PLpgSQL_execstate * estate, PLpgSQL_stmt * stmt)
Line 1957 C
  plpgsql.dll!plpgsql_exec_function(PLpgSQL_function * func,
FunctionCallInfoBaseData * fcinfo, EState * simple_eval_estate, bool
atomic) Line 590 C
  plpgsql.dll!plpgsql_call_handler(FunctionCallInfoBaseData * fcinfo) Line
267 C
  postgres.exe!ExecInterpExpr(ExprState * state, ExprContext * econtext,
bool * isnull) Line 626 C
  [Inline Frame] postgres.exe!ExecEvalExprSwitchContext(ExprState *) Line
307 C
  postgres.exe!ExecProject(ProjectionInfo * projInfo) Line 351 C
  [Inline Frame] postgres.exe!ExecProcNode(PlanState *) Line 239 C
  postgres.exe!ExecutePlan(EState * estate, PlanState * planstate, bool
use_parallel_mode, CmdType operation, bool sendTuples, unsigned __int64
numberTuples, ScanDirection direction, _DestReceiver * dest, bool
execute_once) Line 1652 C
  postgres.exe!standard_ExecutorRun(QueryDesc * queryDesc, ScanDirection
direction, unsigned __int64 count, bool execute_once) Line 378 C
  postgres.exe!PortalRunSelect(PortalData * portal, bool forward, long
count, _DestReceiver * dest) Line 931 C
  postgres.exe!PortalRun(PortalData * portal, long count, bool isTopLevel,
bool run_once, _DestReceiver * dest, _DestReceiver * altdest, char *
completionTag) Line 777 C
  postgres.exe!exec_execute_message(const char * portal_name, long
max_rows) Line 2098 C
  postgres.exe!PostgresMain(int argc, char * * argv, const char * dbname,
const char * username) Line 4299 C
  postgres.exe!BackendRun(Port * port) Line 4432 C
  postgres.exe!SubPostmasterMain(int argc, char * * argv) Line 4955 C
  postgres.exe!main(int argc, char * * argv) Line 216 C
  [External Code]

On Tue, Feb 4, 2020 at 11:40 AM Adrian Klaver 
wrote:

> On 2/4/20 6:20 AM, Doug Roberts wrote:
> >> So how did containers_reset_recirc() come to clash with
> >> containers_add_update()?
> >
> > They are clashing because another portion of our system is running and
> > updating containers. The reset recirc function was run at the same time
> > to see how our system and the database would handle it.
>
> So does your system have the things Tom mentioned below?:
>
> "The known bugs in that area
> require either before-row-update triggers on the table, or
> child tables (either partitioning or traditional inheritance).
> So I wonder what the schema of table "containers" looks like."
>
> >
> > The recirc string is formatted like 2000=3,1000=6,5000=0. So the reset
> > recirc function with take a UID (1000 for example) and use that to
> > remove 1000=x from all of the recirc counts for all of the containers
> > that have 1000=x.
> >
> > We are currently using PG 12.0.
> >
> > Thanks,
> >
> > Doug
> >
> > On Mon, Feb 3, 2020 at 6:21 PM Tom Lane  > 

Re: Postgres Crashing

2020-02-04 Thread Tom Lane
Doug Roberts  writes:
> Hopefully the following stack trace is more helpful.

> Exception thrown at 0x000140446403 in postgres.exe: 0xC005: Access
> violation reading location 0xFFF8. occurred

>> postgres.exe!pfree(void * pointer) Line 1033 C
>   postgres.exe!tts_buffer_heap_clear(TupleTableSlot * slot) Line 653 C
>   [Inline Frame] postgres.exe!ExecClearTuple(TupleTableSlot *) Line 428 C
>   postgres.exe!ExecForceStoreHeapTuple(HeapTupleData * tuple,
> TupleTableSlot * slot, bool shouldFree) Line 1448 C
>   postgres.exe!ExecBRUpdateTriggers(EState * estate, EPQState * epqstate,
> ResultRelInfo * relinfo, ItemPointerData * tupleid, HeapTupleData *
> fdw_trigtuple, TupleTableSlot * newslot) Line 3117 C

Ah, so you *are* using before-row update triggers.  Almost certainly,
this is the same bug fixed by commit 60e97d63e, so you should be okay
if you update to 12.1.  (There are some related issues that will be
fixed in 12.2, due out next week.)

regards, tom lane




Better documentation for schema changes in logical replication

2020-02-04 Thread Mike Lissner
Hi all,

I've been using logical replication for about a year now, and I wonder if
there's any sense that it needs better documentation of schema changes. My
experience is that there's almost no documentation and that there are lots
of opportunities to really screw things up.

It seems like starting somewhere would be good. I'd propose an outline
something like the following:

1. General rules of replication schema changes (do we ALTER the
SUBSCRIPTION to DISABLE it first?) What types of best practices do we have?

2. How to do basic things like add/remove a column/table, etc

3. Particular things that cause issues like making a field NULLable. I'm
sure there are a handful of these I haven't run into yet.
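
To make item 2 concrete, the recipe I'd hope the docs would spell out for
adding a column is roughly the following (names are placeholders, and I'm
genuinely unsure this is right - which is rather the point):

```sql
-- On the subscriber: pause replication first.
ALTER SUBSCRIPTION my_sub DISABLE;
-- Apply the same DDL on BOTH nodes (logical replication does not
-- replicate DDL), e.g.:
ALTER TABLE my_table ADD COLUMN new_col integer;
-- On the subscriber again: resume replication once both sides match.
ALTER SUBSCRIPTION my_sub ENABLE;
```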

My current practice is to set up logical replication across two docker
images and test things out before doing it in production, but every time I
do so I learn something new, despite having carefully read the
documentation. Here's an example of me trying to figure out how to DROP
COLUMNs:

https://github.com/freelawproject/courtlistener/issues/1164

Is this something others think should be improved? I'm not sure I'm
qualified, though I'm keeping a lot of notes about my tests, as above.

Mike


Re: Postgres Crashing

2020-02-04 Thread Doug Roberts
Seems to be working fine now that I've upgraded to 12.1. I'll keep an eye
out for 12.2. However, we are not using a before row update trigger. We are
using an after insert trigger on the containers table though.

Thanks,

Doug

On Tue, Feb 4, 2020 at 2:34 PM Tom Lane  wrote:

> Doug Roberts  writes:
> > Hopefully the following stack trace is more helpful.
>
> > Exception thrown at 0x000140446403 in postgres.exe: 0xC005:
> Access
> > violation reading location 0xFFF8. occurred
>
> >> postgres.exe!pfree(void * pointer) Line 1033 C
> >   postgres.exe!tts_buffer_heap_clear(TupleTableSlot * slot) Line 653 C
> >   [Inline Frame] postgres.exe!ExecClearTuple(TupleTableSlot *) Line 428 C
> >   postgres.exe!ExecForceStoreHeapTuple(HeapTupleData * tuple,
> > TupleTableSlot * slot, bool shouldFree) Line 1448 C
> >   postgres.exe!ExecBRUpdateTriggers(EState * estate, EPQState * epqstate,
> > ResultRelInfo * relinfo, ItemPointerData * tupleid, HeapTupleData *
> > fdw_trigtuple, TupleTableSlot * newslot) Line 3117 C
>
> Ah, so you *are* using before-row update triggers.  Almost certainly,
> this is the same bug fixed by commit 60e97d63e, so you should be okay
> if you update to 12.1.  (There are some related issues that will be
> fixed in 12.2, due out next week.)
>
> regards, tom lane
>


Re: Restrict connection from pgadmin.

2020-02-04 Thread raf
Tom Lane wrote:

> Pawan Sharma  writes:
> > Yes I will show the pgadmin in stat_activity but how  can block that..
> > I think I need to create a job to terminate the pgadmin connections and
> > schedule it for  every 5 min and so that I will check any new connections
> > from pgadmin.
> 
> I think onlookers are still completely mystified as to why you consider
> this a useful activity.
> 
> pgadmin is nothing but a GUI.  Whatever can be done through it can be
> done equally well through psql, or any other client software.  So if
> you're looking for security against unwanted SQL commands, you're going
> about it the wrong way (better to look at preventing logins of privileged
> accounts, and/or use of SQL permissions to limit what can be done).
> If your objective is something else, you haven't explained what that is.
> 
>   regards, tom lane

You could give normal/application users/roles very
limited permissions (i.e. just the ability to execute
pre-existing SECURITY DEFINER functions and nothing
else), and have a database owner user/role with all the
permissions to create those functions. That's my insane
setup and I love it. Then use pg_hba.conf to limit
which IP addresses the database owner user/role can log
in from. You could also uninstall pgAdmin. :-)
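A minimal sketch of that setup, assuming hypothetical role, table, and function names (`db_owner`, `widgets`, `add_widget`):

```sql
-- Run as the db_owner role.  SECURITY DEFINER makes the function execute
-- with its owner's privileges, so app_user needs no direct table access.
REVOKE ALL ON TABLE widgets FROM PUBLIC;

CREATE FUNCTION add_widget(widget_name text) RETURNS void
LANGUAGE sql SECURITY DEFINER AS $$
    INSERT INTO widgets (name) VALUES (widget_name);
$$;

CREATE ROLE app_user LOGIN;
GRANT EXECUTE ON FUNCTION add_widget(text) TO app_user;

-- pg_hba.conf then limits where db_owner may connect from, e.g.:
--   host  mydb  db_owner  192.168.1.0/24  scram-sha-256
```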

Also, if you are worried about "doing a lot of damage
in a minute", always script everything and test it
first, either in a transaction that you roll back or on
a test server, before executing it in production. I
don't think a GUI is suitable for this.
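That dry-run pattern is just a transaction you never commit; a sketch with made-up table and column names:

```sql
BEGIN;
-- the risky change
UPDATE members SET grade = 'A' WHERE win_pct > 75;
-- inspect what it did before deciding
SELECT count(*) FROM members WHERE grade = 'A';
ROLLBACK;   -- nothing above is persisted
```

This works for most DDL too, since PostgreSQL has transactional DDL.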

cheers,
raf





The best way to solve a problem

2020-02-04 Thread Nikolai Lusan

Greeting database brains,

Although I started my career with a short lived role as a database
administrator, and over the years have created a few small databases of my
own, I normally do most of my database things on DB's that have been
created by other people - and even then rarely mess with the internals.

However I have a small problem that I can't figure out the best way to solve. I
am a member of a small sporting association that I am doing some technical
stuff for, part of which is designing and implementing a DB for membership
and scoring records. The logic required in application to do the scoring
system is under control, as is the few tables required for tracking
memberships. The problem I have is tables for the scoring. Each year there
are multiple competitions run, over multiple grades. The first team round
robin competition of each year is used to determine which teams play in
which grades for the main season, and to handicap players based on a
win/loss percentage. I need to keep data around for historical purposes (I
know this means ever increasing storage issues), but some things (like team
home locations) can be kept in a table that has the data changed at the
beginning of each competition. There are also two different types of
singles competitions run each year. All players are members of the
association, so putting them all in one DB makes sense to me, since I can then
easily reference players by their membership numbers (or the sequence
number that identifies their entry in the main membership table). I am not
sure if I should be setting up some kind of template table for the various
competition types and then using it to create either new tables, or new
schemas, for each competition (something named like
__). I need to be able to track player
statistics through each round they play (the number of rounds in each
competition varies depending on the number of teams/individuals playing)
for a variety of reasons, and for the team competitions I need to be able
to track the results of each round, and the number of games played/won/lost
in each round. I would like a solution where I can easily remove historical
data should the committee decide that they only want to keep X years of
records for these competitions.

My initial thought was to put the tables for each competition into a
separate schema, but I'm not sure if this is the right way to go. In fact
I'm not entirely sure I am approaching this whole thing in the right way.

Does anyone have any ideas that could help me create an elegant solution to
this in a Postgres DB?

-- 
Nikolai Lusan 





Re: The best way to solve a problem

2020-02-04 Thread David G. Johnston
On Tue, Feb 4, 2020 at 5:36 PM Nikolai Lusan  wrote:

> I
> am a member of a small sporting association that I am doing some technical
> stuff for, part of which is designing and implementing a DB for membership
> and scoring records.
>

[...]
The rest of that planning seems like a significant case of premature
optimization.  PostgreSQL has built-in partitioning now but even that seems
like an unnecessary addition to your data model at this point.  Trying to
create it manually using schemas is something you probably should just
discard.
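For reference, the built-in partitioning mentioned above looks like the sketch below (illustrative names; as noted, it is probably unnecessary at this scale):

```sql
-- Declarative partitioning (PostgreSQL 10+): one logical table,
-- one physical partition per season.
CREATE TABLE game (
    game_id bigserial,
    season  int NOT NULL,
    played  date
) PARTITION BY LIST (season);

CREATE TABLE game_2020 PARTITION OF game FOR VALUES IN (2020);

-- Discarding a season later is then a single cheap statement:
-- DROP TABLE game_2020;
```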

A matches table with adequate category fields to classify the type of game
being played and its outcome seems like it should be sufficient.  The
business logic you describe is something that queries on that table can
solve.  Reading the detail a bit more you probably want a "match" table and
a "match_result" table so you can pre-load matches that you know are going
to happen and then insert a corresponding record with the outcome once the
match is complete.  That said, NULL can be useful in this limited situation
as well.
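The two-table split above could be sketched like this (column names are guesses at what the problem needs):

```sql
-- "match" rows can be pre-loaded from the fixture; "match_result" rows
-- are inserted once a match has actually been played.
CREATE TABLE match (
    match_id    bigserial PRIMARY KEY,
    competition text NOT NULL,   -- e.g. 'team-rr', 'singles-open'
    season      int  NOT NULL,
    grade       text,
    round       int,
    scheduled   date
);

CREATE TABLE match_result (
    match_id    bigint PRIMARY KEY REFERENCES match (match_id),
    home_score  int NOT NULL,
    away_score  int NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now()
);
```

An unplayed match is then simply one with no `match_result` row, which is often cleaner than storing NULL scores.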

Your needs regarding historical data are not fully clear but you can
generally consider either discarding old information or copying current
data into the match table so that you archive the known values at the time
the match took place.  Again, I wouldn't worry about the duplication onto a
read-only table or the space that it will take - your dataset size doesn't
seem like it will be large enough to matter.

You can always make changes later as new requirements are added or
constraints such as size become more important.

David J.


Re: The best way to solve a problem

2020-02-04 Thread Nikolai Lusan

Hey,

On Tue, 2020-02-04 at 18:14 -0700, David G. Johnston wrote:
> On Tue, Feb 4, 2020 at 5:36 PM Nikolai Lusan  wrote:
> 
> > I
> > am a member of a small sporting association that I am doing some
> > technical
> > stuff for, part of which is designing and implementing a DB for
> > membership
> > and scoring records.
> > 
> 
> [...]
> The rest of that planning seems like a significant case of premature
> optimization.  PostgreSQL has built-in partitioning now but even that
> seems like an unnecessary addition to your data model at this point.
> Trying to create it manually using schemas is something you probably
> should just discard.

This is good to know, I have been using PGModeler to build and verify the
DB. I was putting things in schemas as a form of logical separation for the
next person who comes along to admin this solution (or migrate data from it
to another solution) sometime down the track.


> A matches table with adequate category fields to classify the type of
> game being played and its outcome seems like it should be sufficient.
> The business logic you describe is something that queries on that table
> can solve.  Reading the detail a bit more you probably want a "match"
> table and a "match_result" table so you can pre-load matches that you
> know are going to happen and then insert a corresponding record with the
> outcome once the match is complete.  That said, NULL can be useful in
> this limited situation as well.

A bit more disclosure, this is for an Eight Ball (pool) association. The
team based matches are a 16 frame round robin match (nominally 4 players
per team, but there is the possibility of up to 6 players used by a team in
any given match). The 2 forms of singles are round robin (potentially with
multiple pools of players and seeding of players). Having match data for a
specific division is essential, as are the individual player stats.


> Your needs regarding historical data are not fully clear but you can
> generally consider either discarding old information or copying current
> data into the match table so that you archive the known values at the
> time the match took place.  Again, I wouldn't worry about the
> duplication onto a read-only table or the space that it will take - your
> dataset size doesn't seem like it will be large enough to matter.

With about 300 players per year and 26 team rounds + 7 singles tournaments
the possibility of it growing is there, but I agree it's not really large
enough to matter. Having data around for previous years' tournaments is of
use if a player stops playing for a while and then needs to be handicapped
again, and also for records on the website. This is why I was thinking of
putting things in one table per tournament per year (possibly two tables
for keeping track of teams and fixture dates). I found a Stack Exchange
post from a person looking to do something similar in MySQL using a total
of 3 tables - but I think I want something more extensive than that.
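Re-handicapping a returning player from historical rows is one place a single results table (rather than per-year tables) pays off; a sketch of a win-percentage query, with assumed table and column names:

```sql
-- One row per frame per player; won = true if that player won the frame.
SELECT player_id,
       round(100.0 * count(*) FILTER (WHERE won) / count(*), 1) AS win_pct
FROM frame_result
WHERE season >= 2018          -- keep only the years the committee retains
GROUP BY player_id;
```

Old seasons can later be discarded with a plain `DELETE ... WHERE season < x` instead of dropping per-year tables.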


> You can always make changes later as new requirements are added or
> constraints such as size become more important.

Yeah, but I am doing some forward planning right now. Some features done
now won't be used for a while (there are a couple of older committee
members resistant to change, and some others who want to come into the 21st
century).

-- 
Nikolai Lusan 