Can I disable sslcert/sslkey in pg_service.conf?

2018-06-14 Thread George Woodring
We currently use pg_service.conf and certificates to log into our
databases.  Users have their own ~/.postgresql/postgresql.[crt|key] and
everything is happy.

We are testing a cloud-based Postgres which requires password-based
authentication; however, our existing certificates are causing the cloud
server to reject SSL connections.

[woodring@ibeam]$ psql -U woodring -h host1 "dbname=mydb sslmode=require"
psql: SSL error: tlsv1 alert unknown ca

I can make it work from the command line:

[woodring@ibeam]$ psql -U woodring -h host1 "dbname=mydb sslmode=require
sslcert= sslkey="
Password for user woodring:
psql (9.5.13, server 9.6.3)
WARNING: psql major version 9.5, server major version 9.6.
 Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: DHE-RSA-AES256-GCM-SHA384, bits:
256, compression: off)
Type "help" for help.

mydb=> \q

If I put the blanks into pg_service.conf:

[mydb]
dbname=mydb
host=host1
sslmode=require
sslcert=
sslkey=

It does not work.

[woodring@ibeam]$ PGSERVICE=mydb psql
psql: SSL error: tlsv1 alert unknown ca

I tried the opposite approach of renaming the .postgresql directory and
putting a hard-coded certificate path in pg_service.conf, but that looks
to have its own set of challenges.
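One more thing I may try (untested, so treat as a sketch): libpq is documented to give keywords written directly in the connection string precedence over values coming from the service file, so the empty overrides that work on the command line might also work layered on top of the service entry:

```text
[woodring@ibeam]$ psql "service=mydb sslcert= sslkey="
```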

Thanks for any suggestions.
George Woodring

iGLASS Networks
www.iglass.net


How to make transaction delete see data from a just completed concurrent transaction?

2019-08-05 Thread George Woodring
We have been using the following model for updating certain data in a table:

begin;
delete from foo where bar='myfoo';
insert into foo all of the correct data for myfoo;
commit;

Our thinking was that if two of these ran at close to the same time, the
first transaction would finish and then the second one would run, leaving
the table consistent.  However, this is not the case.  The second
transaction is not deleting anything and we are getting double the inserted
data.  Our guess is that the second transaction is trying to delete the
data as it stood at the start of the transaction, not as of when the lock
on the table is released, and that data is already gone from the first
transaction.

Is there a way to make the delete re-plan to see the data inserted by the
first transaction when the delete is allowed to continue?

The following is what I was using as a test case:

CREATE TABLE woody ( id serial,
 constraint woody_pkey primary key (id),
 mynumber int4,
 myname  varchar
 );
CREATE INDEX myname_INDEX ON woody (myname);

INSERT INTO woody (mynumber, myname) SELECT generate_series(1, 1000),
'woody';

I then placed the following into a file called woody2.sql
BEGIN;
DELETE from woody WHERE myname='woody';
INSERT INTO woody (mynumber, myname) SELECT generate_series(1, 1000),
'woody';

connection1
iss=> \i woody2.sql
BEGIN
DELETE 1000
INSERT 0 1000

connection2
iss=> \i woody2.sql
BEGIN

connection1
iss=> commit;
COMMIT

connection2
DELETE 0
INSERT 0 1000
iss=> commit;
COMMIT
iss=> select count(*) from woody where myname='woody';
 count
---
  2000
(1 row)
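One way to make the second transaction wait for, and then see, the first one's work (a sketch, not something from the thread) is to serialize the delete+insert cycles on a transaction-scoped advisory lock taken before the DELETE:

```sql
BEGIN;
-- Both sessions take the same advisory lock; the second one blocks here
-- until the first commits, so its DELETE then sees the committed rows:
SELECT pg_advisory_xact_lock(hashtext('woody'));
DELETE FROM woody WHERE myname = 'woody';
INSERT INTO woody (mynumber, myname)
SELECT generate_series(1, 1000), 'woody';
COMMIT;
```

The lock is released automatically at COMMIT, so no explicit unlock is needed.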


Thanks,
George
iGLASS Networks
www.iglass.net


Re: export to parquet

2020-08-26 Thread George Woodring
I don't know how many hoops you want to jump through; we use AWS and Athena
to create them.

   - Export table as JSON
   - Put on AWS S3
   - Create JSON table in Athena
   - Use the JSON table to create a parquet table

The parquet files will be in S3 as well after the parquet table is
created.  If you are interested I can share the AWS CLI commands we use.
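The Athena step can be sketched roughly like this (table and bucket names are made up; it assumes a JSON-backed table already exists over the S3 data):

```sql
-- Athena CTAS: rewrite the JSON-backed table as Parquet files in S3.
CREATE TABLE mydb.foo_parquet
WITH (format = 'PARQUET',
      external_location = 's3://my-bucket/foo-parquet/')
AS SELECT * FROM mydb.foo_json;
```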

George Woodring
iGLASS Networks
www.iglass.net


On Wed, Aug 26, 2020 at 3:00 PM Scott Ribe 
wrote:

> I have no Hadoop, no HDFS. Just looking for the easiest way to export some
> PG tables into Parquet format for testing--need to determine what kind of
> space reduction we can get before deciding whether to look into it more.
>
> Any suggestions on particular tools? (PG 12, Linux)
>
>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> https://www.linkedin.com/in/scottribe/
>
>
>
>
>
>


Re: Check for duplicates before inserting new rows

2020-09-04 Thread George Woodring
I would suggest creating a temp table based on the original table and
loading the data into it first.  You can then purge the duplicates.
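A minimal sketch of that approach (table and column names are hypothetical; load with \copy or your tool of choice):

```sql
-- Stage the incoming data in a temp table shaped like the target:
CREATE TEMP TABLE stations_load (LIKE stations INCLUDING DEFAULTS);
-- load the new data, e.g. from psql:
-- \copy stations_load FROM 'new_stations.csv' WITH (FORMAT csv)

-- Insert only rows whose key is not already present:
INSERT INTO stations (station_id, name)
SELECT DISTINCT ON (station_id) station_id, name
FROM stations_load l
WHERE NOT EXISTS (SELECT 1 FROM stations s
                  WHERE s.station_id = l.station_id);
```

The same pattern repeated for the measurements table would cover both targets.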

George Woodring
iGLASS Networks
www.iglass.net


On Fri, Sep 4, 2020 at 9:21 AM Rich Shepard 
wrote:

> This is a new issue for me: I've received data from a different source and
> need to add non-duplicates to two tables in the database. Each row in the
> new data has a station ID and associated measurements.
>
> The existing database includes a table for station information and another
> for measurements made there.
>
> I want to learn which stations and which measurements in the new data set
> are not included in the existing tables. And, I don't want to try inserting
> the new data and have postgres tell me when it's found duplicates,
> especially since there are two tables involved.
>
> My research into how to do this has not found a solution so I ask for
> pointers to resources that will teach me how to add these new data to both
> tables.
>
> Regards,
>
> Rich
>
>
>
>


foreign key on delete cascade order?

2022-03-10 Thread George Woodring
When using FOREIGN KEY ON DELETE CASCADE, is there an order the entries are
being deleted?

We are seeing intermittent deadlocks when trying to update a table whose
foreign-key parent row is being deleted.

We have 4 levels of tables chained by foreign keys.

machine -> point -> poll -> status

The status table is the only one updated constantly, and we are trying to
figure out how to order the updates to avoid the deadlock when a machine is
deleted.
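One pattern that might help (a sketch only; the pollid value and the `lastseen` column are hypothetical): have every updater lock its status rows in a deterministic order before updating, so two sessions can never end up holding row locks in opposite orders:

```sql
BEGIN;
-- Lock the rows this updater will touch, always in statusid order:
SELECT statusid FROM status
 WHERE pollid = 42              -- hypothetical pollid
 ORDER BY statusid
   FOR NO KEY UPDATE;
UPDATE status SET lastseen = now()   -- hypothetical column
 WHERE pollid = 42;
COMMIT;
```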

Thanks for your help
George
iGLASS Networks
www.iglass.net


Re: foreign key on delete cascade order?

2022-03-10 Thread George Woodring
On Thu, Mar 10, 2022 at 10:56 AM Tom Lane  wrote:

> Each cascaded delete ought to be removing a disjoint set of rows in the
> referencing table, so I'm not quite sure why order should matter.
>
> regards, tom lane
>

I have always thought the way to avoid deadlocks was to have the different
updaters update rows in the same order.  Is there a better approach for
updating and deleting rows at the same time?  Do we need to take a lock
on the table to update, and have the delete queue up waiting for that lock
to be released?

Thanks,
George


Re: foreign key on delete cascade order?

2022-03-11 Thread George Woodring
On Thu, Mar 10, 2022 at 12:38 PM Tom Lane  wrote:

> With the amount of detail you've provided (viz: none)
>

This is an example of the error we are seeing from our application. Sorry,
I cannot find the postgresql log entry for this one.

2020-11-30T13:16:08,835 ERROR [foo/bar/01EF2.W01E/55159]
GlobalControllerAdvice: Caught exception (
https://noc.iglass.net/networkMachDelete.htm
):
org.springframework.dao.DeadlockLoserDataAccessException:
PreparedStatementCallback; SQL [DELETE FROM mach WHERE (machid=?)]; ERROR:
deadlock detected
  Detail: Process 27442 waits for ShareLock on transaction 1895244982;
blocked by process 21064.
Process 21064 waits for ShareLock on transaction 1895245026; blocked by
process 27442.
  Hint: See server log for query details.
  Where: while deleting tuple (5,18) in relation "status"
SQL statement "DELETE FROM ONLY "public"."status" WHERE $1
OPERATOR(pg_catalog.=) "pollid""; nested exception is
org.postgresql.util.PSQLException: ERROR: deadlock detected
  Detail: Process 27442 waits for ShareLock on transaction 1895244982;
blocked by process 21064.
Process 21064 waits for ShareLock on transaction 1895245026; blocked by
process 27442.
  Hint: See server log for query details.
  Where: while deleting tuple (5,18) in relation "status"
SQL statement "DELETE FROM ONLY "public"."status" WHERE $1
OPERATOR(pg_catalog.=) "pollid""

The tables involved are:
CREATE TABLE mach ( machid serial,
    constraint mach_pkey primary key (machid), ... );
CREATE TABLE pollgrpinfo ( pollgrpid serial,
    constraint pollgrpinfo_pkey primary key (pollgrpid),
    machid int4 NOT NULL,
    constraint mach_exists FOREIGN KEY (machid)
        REFERENCES mach ON DELETE CASCADE, ... );
CREATE TABLE poll ( pollid serial,
    constraint poll_pkey primary key (pollid),
    pollgrpid int4 not null,
    constraint pollgrp_exists FOREIGN KEY (pollgrpid)
        REFERENCES pollgrpinfo (pollgrpid) ON DELETE CASCADE, ... );
CREATE TABLE status ( statusid serial,
    constraint status_pkey primary key (statusid),
    pollid int4 not null,
    constraint poll_exists FOREIGN KEY (pollid)
        REFERENCES poll ON DELETE CASCADE, ... );

We are updating the entire status table every 5 minutes with
BEGIN;
UPDATE status SET () WHERE pollid = $1;
COMMIT;

The issue arises when someone does a DELETE during the UPDATE of status:
DELETE FROM mach WHERE machid=$1;

I don't know if this sheds any more light on it.
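Another angle that occurs to me (a sketch; the machid value is made up): pre-lock the status rows the cascade will hit, in the same order the updaters touch them, before deleting the machine, so the cascade cannot interleave with a concurrent UPDATE:

```sql
BEGIN;
-- Walk the foreign-key chain and lock the status rows in statusid order:
SELECT s.statusid
  FROM status s
  JOIN poll p USING (pollid)
  JOIN pollgrpinfo g USING (pollgrpid)
 WHERE g.machid = 42            -- hypothetical machid
 ORDER BY s.statusid
   FOR UPDATE;
DELETE FROM mach WHERE machid = 42;
COMMIT;
```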

George
iGLASS Networks


Re: foreign key on delete cascade order?

2022-03-14 Thread George Woodring


iGLASS Networks
www.iglass.net


On Fri, Mar 11, 2022 at 10:03 PM Michael Lewis  wrote:

> Could you set lock_timeout, lock table explicitly for SHARE UPDATE
> EXCLUSIVE (pretty sure that would be the proper level), then retry if it
> fails because a delete is already going on?


I don't think I can take a lock on the table; there are multiple processes
that update the status table.  While the processes do not overlap, I think
locking the table would cause them to block each other.
I think we would just have to retry the delete after the deadlock, which is
currently done manually by refreshing the web page.  The update never seems
to be interrupted, probably because it is longer running and starts before
the random delete.


> Also, are you confident that before you call 'begin' to do the update, you
> are not already in a transaction which might have some lock on row(s) in
> mach, or one of the other tables involved?
>

For the update processes, all other tables are read-only reference tables;
I don't think they would be locked.

 Thank you for your help.
George


Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
We are having an issue with one of our plpgsql functions after migrating
from 9.3 to 9.6.  The function works fine until you change the search path.

psql (9.6.10)
Type "help" for help.

woody=> select ticket_summary(8154);
  ticket_summary
---
 {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
(1 row)

woody=> set search_path to "iss-hackers", public;
SET
woody=> select ticket_summary(8154);
ERROR:  invalid input syntax for type timestamp with time zone: "woodring"
CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL statement

It is confused about which column is which after the change.

The tables used by the function are:
public.tickets - A table with 3 child tables
iss-hackers.tickets - A view of public.tickets with a where clause.
public.followups - A table with 3 child tables.

CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
   DECLARE
  tid ALIAS FOR $1;
  cstate public.followups.state%TYPE := 1;
  ticket public.tickets%ROWTYPE;
  followup public.followups%ROWTYPE;
  summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';
  lastdate public.followups.date%TYPE;
   BEGIN
  SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
  IF NOT FOUND THEN
 return summary;
  END IF;
  lastdate := ticket.opendate;
  FOR followup IN SELECT * FROM public.followups WHERE ticketsid=tid
AND state IS NOT NULL ORDER BY date LOOP
 summary[cstate] := summary[cstate] + extract( EPOCH FROM
(followup.date - lastdate))::int;
 cstate := followup.state;
 lastdate := followup.date;
  END LOOP;
  summary[cstate] := summary[cstate] + extract( EPOCH FROM
(current_timestamp - lastdate))::int;
  RETURN summary;
   END;
$$ LANGUAGE plpgsql;

I assume I can fix this by putting the function into each of the schemas,
but I thought I would ask opinions before doing so.
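One alternative to duplicating the function that I am weighing (a sketch): schema-qualify the table inside the function so the row always comes from the same relation the ROWTYPE was declared against, regardless of search_path:

```sql
-- Inside ticket_summary(): pin the lookup to the public table so the
-- column order matches ticket public.tickets%ROWTYPE:
SELECT * INTO ticket FROM public.tickets WHERE ticketsid = tid;
```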

 Thanks,
George Woodring
iGLASS Networks
www.iglass.net


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
I think the issue is that the function is not putting the data into the
tickets%ROWTYPE correctly.  When I do \d on public.tickets and
iss-hackers.tickets, the columns are in a different order.

\d public.tickets
Column|   Type   |
Modifiers
--+--+-
 ticketsid| integer  | not null default
nextval('tickets_ticketsid_seq'::regclass)
 opendate | timestamp with time zone | default now()
 state| smallint | default 1
 opentech | character varying(50)|
 priority | smallint | default 10
 severity | smallint | default 30
 problem  | character varying(300)   |
 summary  | text |
 parent   | integer  |
 remed| boolean  | default false
 remed2   | boolean  | default false
 remed_hstart | timestamp with time zone |
 autoclean| boolean  | default false
 remoteid | character varying|
 remotesync   | timestamp with time zone |
 sla_time | interval |
 sla_alarm| boolean  |

\d iss-hackers.tickets
 View "iss-hackers.tickets"
Column|   Type   | Modifiers
--+--+---
 ticketsid| integer  |
 opentech | character varying(50)|
 summary  | text |
 parent   | integer  |
 opendate | timestamp with time zone |
 priority | smallint |
 problem  | character varying(300)   |
 autoclean| boolean  |
 state| smallint |
 severity | smallint |
 remed| boolean  |
 remed2   | boolean  |
 remoteid | character varying|
 remotesync   | timestamp with time zone |
 sla_time | interval |
 sla_alarm| boolean  |
 remed_hstart | timestamp with time zone |
 tableoid | oid  |


The error message is saying column 2 is not a timestamp, while in the
public table column 2 is a timestamp.  If I change the SELECT in the
function from SELECT * to SELECT opendate, I can fix my issue easily.

George
iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 8:44 AM Pavel Stehule 
wrote:

> Hi
>
> Ășt 13. 11. 2018 v 14:18 odesĂ­latel George Woodring <
> george.woodr...@iglass.net> napsal:
>
>> We are having an issue with one of our plpgsql functions after migrating
>> from 9.3 to 9.6.  The function works fine until you change the search path.
>>
>> psql (9.6.10)
>> Type "help" for help.
>>
>> woody=> select ticket_summary(8154);
>>   ticket_summary
>> ---
>>  {58451,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
>> (1 row)
>>
>> woody=> set search_path to "iss-hackers", public;
>> SET
>> woody=> select ticket_summary(8154);
>> ERROR:  invalid input syntax for type timestamp with time zone: "woodring"
>> CONTEXT:  PL/pgSQL function ticket_summary(integer) line 11 at SQL
>> statement
>>
>> It is confused which column is which after the change.
>>
>> The tables used by the function are:
>> public.tickets - A table with 3 child tables
>> iss-hackers.tickets - A view of public.tickets with a where clause.
>> public.followups - A table with 3 child tables.
>>
>> CREATE OR REPLACE FUNCTION ticket_summary(int4) RETURNS int8[] AS $$
>>DECLARE
>>   tid ALIAS FOR $1;
>>   cstate public.followups.state%TYPE := 1;
>>   ticket public.tickets%ROWTYPE;
>>   followup public.followups%ROWTYPE;
>>   summary int8[] := '{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
>> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}';
>>   lastdate public.followups.date%TYPE;
>>BEGIN
>>   SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
>>   IF NOT FOUND THEN
>>  return summary;
>>   END IF;
>>   lastdate := ticket.opendate;
>>   FOR followup IN SELECT * FROM public.followups WHERE ticketsid=tid
>> AND state IS NOT NULL ORDER BY date LOOP
>>  summary[cstate] := summary[cstate] + extract( EPOCH FROM
>> (followup.date - lastdate))::int;
>>  cstate := followup.state;
>>  lastdate := followup.date;
>>   END LOOP;
>>   summary[cstate] := summary[cstate] + extract( EPOCH FROM
>> (current_timestamp - 

Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
The tickets view restricts which tickets can be seen by the schema.

9.3 must have created the view in the same column order as the table (which
is the case looking at one of our 9.3 databases which we have not updated
yet), which is why we never saw the issue before.

George
iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 9:46 AM Adrian Klaver 
wrote:

> On 11/13/18 6:27 AM, George Woodring wrote:
> > I think the issue is that the function is not putting the data into the
> > tickets%ROWTYPE correctly.  When I do \d on public.tickets and
> > iss-hackers.tickets, the columns are in a different order.
> >
>
> >
> > The error message is saying column2 is not a timestamp, which the public
> > table is a timestamp for column2.  If I change my SELECT in the function
> > from SELECT * to SELECT opendate  I can fix my issue easily.
>
> Or change this:
>
>   SELECT * INTO ticket FROM tickets WHERE ticketsid=tid;
>
> to:
>
>   SELECT * INTO ticket FROM public.tickets WHERE ticketsid=tid;
>
> This will match the ROWTYPE:
>
> ticket public.tickets%ROWTYPE;
>
> >
> > George
> > iGLASS Networks
> > www.iglass.net <http://www.iglass.net>
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
CREATE OR REPLACE VIEW tickets AS
 SELECT *, tableoid
   FROM public.tickets
  WHERE ( ticketsid IN (SELECT ticketsid
FROM machtick));

iGLASS Networks
www.iglass.net


On Tue, Nov 13, 2018 at 10:32 AM Adrian Klaver 
wrote:

> On 11/13/18 6:54 AM, George Woodring wrote:
> > The tickets view restricts which tickets can be seen by the schema.
> >
> > 9.3 must have created the view in the same column order as the table
> > (which is the case looking at one of our 9.3 databases which we have not
> > updated yet), which is why we never saw the issue before.
>
> What is the view definition?
>
> >
> > George
> > iGLASS Networks
> > www.iglass.net <http://www.iglass.net>
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread George Woodring
I apologize for top posting; Google hid all of the other stuff.

George
iGLASS Networks
www.iglass.net





Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread George Woodring
We are running 9.6.8-10 on our five Postgres clusters.
We have ~25 databases per cluster.
We have two servers that update a table in each database.

During the update process we were running a plpgsql function that
calculated the difference between the new value and the previous one.  This
function would do a SELECT to get the previous value from the table and do
the calculation before the UPDATE.  We also have an AFTER TRIGGER to save
the row into a logging table.

We recently changed the process to do a BEFORE TRIGGER to do the
calculation between OLD and NEW instead of the separate function with the
SELECT.

After doing this, CPU on our two client servers went crazy, while CPU on
the database servers looked unchanged.  Rolling back this change fixed our
client CPU issue.

Does anyone know of a reason why this change would affect the client?  I
would think it would only affect the server side.  I could not see any
messages anywhere in the logs showing any type of error.

Thanks,
George Woodring
iGLASS Networks
www.iglass.net


Re: Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread George Woodring
Adrian Klaver  writes:
> What is the client server and what is it doing?

Our client creates a temp table
CREATE TEMP TABLE myraw(LIKE raw INCLUDING DEFAULTS)
We load data with a copy
COPY myraw (transferid, pollgrpid, date, data, rrdtypeid, error ) FROM
STDIN WITH DELIMITER '|'
We do some checking on the data in the temp table
SELECT rawdate, raw_maint(rawdate) AS checked FROM (SELECT
date_trunc('minute', date) AS rawdate FROM myraw GROUP BY rawdate) AS foo;
Then we save the data into the permanent location
SELECT transferid, pollgrpid, process_lastpoll(ROW(myraw.*)::raw) AS
processed FROM myraw ORDER BY transferid

Our change was in the process_lastpoll() function.  We went from
  IF cntr THEN
 oval := calc_last_cntr(ptype, pgid, ds, pdate, data);
  ELSE
 oval := data;
  END IF;

  -- Insert data into table
  -- RAISE DEBUG 'Process: dsnum=%, polltype=%, cntr?=%, pdate=%,
data=%, oval=%', dsnum, ptype, cntr, pdate, data, oval;
  UPDATE lastpoll SET polltype=ptype, timeslot=ts, lasttime=pdate,
lastval=data, outval=oval, error=err WHERE pollgrpid=pgid AND dsnum=ds;
  IF found THEN
 RETURN true;
  END IF;
  BEGIN
 INSERT INTO lastpoll (timeslot, pollgrpid, dsnum, polltype,
lasttime, lastval, outval, error) VALUES (ts, pgid, ds, ptype, pdate, data,
oval, err);
 RETURN true;
  EXCEPTION WHEN unique_violation THEN
 -- Don't do anything
  END;
to this
  IF NOT cntr THEN
 oval := data;
  END IF;

  -- Insert data into table
  -- RAISE DEBUG 'Process: dsnum=%, polltype=%, cntr?=%, pdate=%,
data=%, oval=%', dsnum, ptype, cntr, pdate, data, oval;
  UPDATE lastpoll SET polltype=ptype, timeslot=ts, lasttime=pdate,
lastval=data, outval=oval, error=err WHERE pollgrpid=pgid AND dsnum=ds;
  IF found THEN
 RETURN true;
  END IF;
  BEGIN
 INSERT INTO lastpoll (timeslot, pollgrpid, dsnum, polltype,
lasttime, lastval, outval, error) VALUES (ts, pgid, ds, ptype, pdate, data,
oval, err);
 RETURN true;
  EXCEPTION WHEN unique_violation THEN
 -- Don't do anything
  END;

The calc_last_cntr() was replaced with the trigger.
from:
CREATE OR REPLACE FUNCTION calc_last_cntr(ptype text, pgid int4, ds int4,
pdate timestamp with time zone, data double precision) RETURNS double
precision AS $$
   DECLARE
  mcurr RECORD;
  res double precision;
  dwindow int4;
   BEGIN

  SELECT lasttime, lastval INTO mcurr FROM lastpoll WHERE
pollgrpid=pgid AND dsnum=ds;
  IF mcurr IS NULL THEN
 -- RAISE DEBUG 'No previous data found for pgid=%', pgid;
 return NULL;
  END IF;

  -- Calculate the counter rate
  -- Check for div by 0
  dwindow := EXTRACT(EPOCH FROM (pdate - mcurr.lasttime));
  IF dwindow = 0 THEN
 RAISE WARNING '(%) Time difference is zero (% - %)',
current_database(), pdate, mcurr.lasttime;
 return NULL;
  END IF;

  res := (data - mcurr.lastval)/dwindow;
  IF res < 0 THEN
 -- RAISE DEBUG 'Counter loop for pgid=%', pgid;
 return data/dwindow;
  END IF;

  return res;
   END;
$$ LANGUAGE plpgsql;

to:
CREATE OR REPLACE FUNCTION cntr_lastpoll() RETURNS trigger AS $$
   DECLARE
  dwindow int4;
   BEGIN
  -- Log new value into history if has a date
  IF NEW.outval IS NULL THEN
 -- Calculate counter value for this DS.
 dwindow := EXTRACT(EPOCH FROM (NEW.lasttime - OLD.lasttime));
 IF dwindow = 0 THEN
RAISE WARNING '(%) Time difference is zero (% - %)',
current_database(), NEW.lasttime, OLD.lasttime;
return NEW;
 END IF;

 NEW.outval := (NEW.lastval - OLD.lastval)/dwindow;
 IF NEW.outval < 0 THEN
NEW.outval := NEW.lastval/dwindow;
 END IF;

  END IF;
  RETURN NEW;
   END;
$$ LANGUAGE plpgsql;

-- Create Trigger to calculate counter values if needed.
CREATE TRIGGER lastpoll_counter BEFORE UPDATE ON lastpoll
   FOR EACH ROW EXECUTE PROCEDURE cntr_lastpoll();

On Fri, Jan 18, 2019 at 2:01 PM Tom Lane  wrote:

>  what uses that value client-side, exactly?
>

The outval number is later read looking for values outside of thresholds.
I did not think about it affecting selects later in the code.
All in all everything worked fine, but since the client box is an AWS EC2
instance, it started eating all of our CPU credits.




iGLASS Networks
www.iglass.net





Clarification on PL/pgSQL query plan caching

2017-12-15 Thread George Woodring
We were experiencing insert slowdowns at the beginning of the day, when we
add new tables.  As part of our data insert process we have a read
function, and we decided to modify it to use EXECUTE to avoid plan caching.
Our assumption was that adding the table would invalidate the plan for the
currently running connection.

This fixed our issue, but the surprising side effect is that it took 100
sec off of our runtime at other parts of the day.

I have added the before and after examples.  I am wondering *why* writing
it as in the before example is bad?  Does the function only get one plan?

BEFORE:
  IF ptype = 'data' THEN
 SELECT lasttime, lastval INTO mcurr FROM d_current WHERE lasttime
> pdate - '1 hour'::interval AND id = pid ORDER BY lasttime DESC LIMIT 1;
  ELSIF ptype = 'idata' THEN
 SELECT lasttime, lastval INTO mcurr FROM c_current WHERE
lasttime > pdate - '1 hour'::interval AND id = pid ORDER BY lasttime DESC
LIMIT 1;
  ELSIF ptype = 'ddata' THEN
 SELECT lasttime, lastval INTO mcurr FROM c_current WHERE lasttime
> pdate - '1 hour'::interval AND id = pid  ORDER BY lasttime DESC LIMIT 1;
  ELSIF ptype = 'ldata' THEN
 SELECT lasttime, lastvall INTO mcurr FROM dl_current WHERE
lasttime > pdate - '1 hour'::interval AND id = pid ORDER BY lasttime DESC
LIMIT 1;
  ELSE
 SELECT lasttime, lastval INTO mcurr FROM current WHERE lasttime >
pdate - '1 hour'::interval AND id = pid ORDER BY lasttime DESC LIMIT 1;
  END IF;
  IF NOT FOUND THEN
 -- RAISE DEBUG 'No previous data found for pgid=%', pgid;
 return NULL;
  END IF;

AFTER:
  limiter timestamptz := pdate - '1 hour'::interval;

  IF ptype = 'data' THEN
 table_name := 'd_current';
  ELSIF ptype = 'ldata' THEN
 table_name := 'dl_current';
  ELSIF ptype = 'idata' THEN
 table_name := 'c_current';
  ELSIF ptype = 'ddata' THEN
 table_name := 'c_current';
  END IF;

  EXECUTE 'SELECT lasttime, lastval FROM ' || table_name || ' WHERE
lasttime > $1 AND id = $2 ORDER BY lasttime DESC LIMIT 1' INTO mcurr USING
limiter, pid;
  IF mcurr IS NULL THEN
 -- RAISE DEBUG 'No previous data found for pgid=%', pgid;
 return NULL;
  END IF;
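For what it's worth, the dynamic statement in the AFTER version can also be written with format(), which quotes the identifier safely (%I) while keeping the same per-call planning behavior (a sketch using the same variables as above):

```sql
-- format(%I) quote-protects the table name; the data values still go
-- through parameters via USING, so no value ever gets interpolated:
EXECUTE format('SELECT lasttime, lastval FROM %I
                WHERE lasttime > $1 AND id = $2
                ORDER BY lasttime DESC LIMIT 1', table_name)
   INTO mcurr USING limiter, pid;
```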

Thanks,
George Woodring
iGLASS Networks
www.iglass.net