Partitioning constraints vs before-trigger

2018-01-18 Thread Nikolay

Hi all,

Could anybody explain what happens first: the constraint check or the
before-trigger execution?


I have a table, partitioned by date:

CREATE TABLE foo
(
  unid text NOT NULL,
  logtime timestamp with time zone NOT NULL,
  size integer,
  CONSTRAINT foo_pkey PRIMARY KEY (unid)
);

-- There is a before-insert trigger which works perfectly: it creates a new
-- monthly partition if necessary and inserts the new record into that
-- partition.
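
-- A much-simplified sketch of what that routing trigger might look like
-- (the function and trigger names below are made up, and the real trigger
-- also creates the partition when it is missing):

CREATE OR REPLACE FUNCTION foo_insert_trigger() RETURNS trigger AS
$BODY$
BEGIN
  -- route the new row into the monthly child matching NEW.logtime
  EXECUTE 'INSERT INTO foo_' || to_char(NEW.logtime, 'YYYY_MM')
       || ' SELECT ($1).*' USING NEW;
  RETURN NULL;  -- the row is already stored in the child, skip the parent
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER trg_foo_insert BEFORE INSERT ON foo
  FOR EACH ROW EXECUTE PROCEDURE foo_insert_trigger();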

-- Here is what the partitions look like:

CREATE TABLE foo_2018_01
(
  CONSTRAINT foo_2018_01_pkey PRIMARY KEY (unid),
  CONSTRAINT foo_2018_01_logtime_check CHECK (logtime >= '2018-01-01 
00:00:00+00'::timestamp with time zone AND logtime < '2018-02-01 
00:00:00+00'::timestamp with time zone)

)
INHERITS (foo);

I cannot change anything in the application, as it's proprietary. So I 
had to do partitioning myself with a trigger.


Now there's a new problem. It looks like the application sometimes does
UPDATEs to the "logtime" column, which I use for partitioning.


So the application can do something like UPDATE foo SET
logtime='2017-12-01 00:00:00+00', size=5 WHERE unid='blahblablah',
although this record had logtime='2018-01-18 00:00:00+00' and was in a
different partition.


In that case I see this error (and the transaction aborts):

ERROR:  new row for relation "foo_2018_01" violates check constraint 
"foo_2018_01_logtime_check"


For the business logic it wouldn't be critical if I forbid/undo/replace
modification of the logtime column. But the other columns must be updated by
the application when necessary.


So now I need to ignore the new value for the "logtime" column on every
UPDATE to table "foo".


Here is my idea:

CREATE OR REPLACE FUNCTION logtime_update_trigger() RETURNS trigger AS
$BODY$
BEGIN
  IF (NEW.logtime != OLD.logtime) THEN
    NEW.logtime := OLD.logtime;
  END IF;
  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER trg_foo_update BEFORE UPDATE ON foo
  FOR EACH ROW EXECUTE PROCEDURE logtime_update_trigger();

Unfortunately, it seems like this trigger is not even being executed and 
I still get the same error:


ERROR:  new row for relation "foo_2018_01" violates check constraint 
"foo_2018_01_logtime_check"


I suppose that's because the constraint check is performed before the
trigger is fired? Are there any workarounds here?


I also tried to create a rule:

CREATE OR REPLACE RULE test_rule AS ON UPDATE TO foo
   WHERE new.logtime <> old.logtime DO INSTEAD
UPDATE foo SET size = new.size WHERE foo.unid = old.unid AND foo.logtime 
= old.logtime;


But then I get a recursion error:

ERROR: infinite recursion detected in rules for relation "foo"

Possibly because the recursion analysis doesn't take the WHERE condition
into account.


Any help would be greatly appreciated.

PostgreSQL version: 9.0.1 on CentOS 5 i686.

Best regards, Nikolay Karikh.




Connection type

2018-01-18 Thread Enrico Pirozzi

Hi,

is there a way to know what kind of connection a client is using?
(ODBC, JDBC, etc.)


I looked at the pg_stat_activity view,

but there is no information in the application name field about
what kind of connection a client is using.

Thanks

Enrico

e.piro...@nbsgroup.it



Could not read block in file

2018-01-18 Thread Don Seiler
This morning I got some errors from our pre-prod environment. This is
running Pg 9.6.6 on CentOS 7.3.

Just grepping out these errors (and obfuscating sensitive data):
2018-01-18 06:29:21 CST [11912]: [1570-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 39 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:29:42 CST [11959]: [517-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 40 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:30:09 CST [11912]: [1577-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 41 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:30:32 CST [11970]: [162-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 42 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:30:40 CST [11912]: [1583-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 43 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:31:04 CST [11970]: [168-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 44 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:31:31 CST [11959]: [595-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 45 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes

All of these errors occurred during attempts to insert into this Liferay
table. For example:

2018-01-18 06:29:21 CST [11912]: [1568-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com LOG:  execute S_14: INSERT INTO
quartz_FIRED_TRIGGERS (ENTRY_ID, TRIGGER_NAME, TRIGGER_GROUP, IS_VOLATILE,
INSTANCE_NAME, FIRED_TIME, STATE, JOB_NAME, JOB_GROUP, IS_STATEFUL,
REQUESTS_RECOVERY, PRIORITY) VALUES($1, $2, $3, $4, $5, $6, $7, $8, $9,
$10, $11, $12)
2018-01-18 06:29:21 CST [11912]: [1569-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com DETAIL:  parameters: $1 = 'FOO', $2 = 'Bar
CSVTrigger', $3 = 'DEFAULT', $4 = 'f', $5 = 'FOO', $6 = '123', $7 = 'BLAH',
$8 = NULL, $9 = NULL, $10 = 'f', $11 = 'f', $12 = '5'
2018-01-18 06:29:21 CST [11912]: [1570-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com ERROR:  could not read block 39 in file
"pg_tblspc/16406/PG_9.6_201608131/16416/25690536": read only 0 of 8192 bytes
2018-01-18 06:29:21 CST [11912]: [1571-1]
db=abcprod,user=abcreporting2,app=PostgreSQL JDBC Driver,client=
abcchizapp1.mycompanyname.com STATEMENT:  INSERT INTO quartz_FIRED_TRIGGERS
(ENTRY_ID, TRIGGER_NAME, TRIGGER_GROUP, IS_VOLATILE, INSTANCE_NAME,
FIRED_TIME, STATE, JOB_NAME, JOB_GROUP, IS_STATEFUL, REQUESTS_RECOVERY,
PRIORITY) VALUES($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)

These are the only occurrences of these errors. I've since been able to
query this table as well as take a pg_dump of the schemas on this
tablespace, with no errors either time, so I can't seem to reproduce the
problem. The app has also performed many, many successful inserts into that
table since these errors.

I don't see any errors in /var/log/messages that would indicate any
filesystem issues either. Obviously the worry is possible corruption.
Granted, this is "only" pre-prod, but there will be concerns from the
business side prior to upgrading our prod DB (currently 9.2.22).

What else can/should I check here to make sure there isn't something wrong
with this database cluster?
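
For reference, the relfilenode in that path can be mapped back to the
relation it belongs to, which at least pins down which object to inspect.
A sketch using the tablespace and relfilenode OIDs from the error message
(both functions should be available on 9.6):

-- pg_tblspc/16406/PG_9.6_201608131/16416/25690536 -> which relation?
SELECT pg_filenode_relation(16406, 25690536);

-- or search pg_class by filenode
SELECT n.nspname, c.relname, c.relkind
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE pg_relation_filenode(c.oid) = 25690536;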

-- 
Don Seiler
www.seiler.us


Re: Connection type

2018-01-18 Thread Steve Atkins

> On Jan 18, 2018, at 4:06 PM, Enrico Pirozzi  wrote:
> 
> Hi,
> 
> is there a way to know what kind of connection a client is doing? 
> (ODBC,JDBC..etc)
> 
> I saw the pg_stat_activity view, 
> 
> 
> but in the application name field there no infomation about 
> what kind of connection a client is doing.

No. They're all exactly the same as far as PostgreSQL is concerned: they all
speak the same native protocol, or close enough. The ODBC / JDBC / libpq
difference is purely on the client side.

A client can voluntarily set application_name, e.g. as part of its connection
string, to identify itself to the server, if you want an easy way to tell
which sort of client is connected.
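
A minimal sketch (the exact parameter spelling varies by driver, so treat
these values as illustrative):

-- e.g. in a libpq-style connection string:
--   host=db1 dbname=app user=me application_name=my-jdbc-app
-- or set after connecting:
SET application_name = 'my-jdbc-app';

-- it then shows up per backend:
SELECT pid, application_name, client_addr FROM pg_stat_activity;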

Cheers,
  Steve




Call sql function in psql

2018-01-18 Thread Abhra Kar
Hi All,

  Please send me some info on how to post directly to the mailing
list; somehow I am not able to find the way, so I am mailing here.


Present problem--


  I tried to change the below code in psql —



Session sess = (Session) entityManager.getDelegate();

 sess.createSQLQuery("{ call reset_all() }").executeUpdate();



Here 'sess' is  org.hibernate.Session and reset_all() is a function which
contains—



DECLARE
  username varchar(30);
BEGIN
  select user into username;
  if username like 'X%' then
    update Y set  = 0;
    update  set = 0;
    EXECUTE('truncate table abc');
    EXECUTE('truncate table def');
  else
    RAISE NOTICE 'User not allowed to run this procedure';
  end if;
end;



I tried to modify it like this:



sess.createSQLQuery("select reset_all()").executeUpdate();



I am getting the error -- org.postgresql.util.PSQLException: A result was
returned when none was expected.



What should be the syntax for calling reset_all()?


Thanks and Regards,

Abhra


Re: Call sql function in psql

2018-01-18 Thread Rob Sargent

> On Jan 18, 2018, at 9:22 AM, Abhra Kar  wrote:
> 
> Hi All,
> 
>   Please send me some info how to post directly in mailing 
> list,somwhow I am not able to find out the way so I am mailing here.
> 
> 
Here is fine

> 
> Present problem-- 
> 
> 
> 
>   I tried to change below code in psql —
> 
>  
> Session sess = (Session) entityManager.getDelegate();
> 
>  sess.createSQLQuery("{ call reset_all() }").executeUpdate();
> 
>  
> Here 'sess' is  org.hibernate.Session and reset_all() is a function which 
> contains—
> 
>  
> DECLARE
> 
> username varchar(30);
> 
> BEGIN
> 
> select user into username;
> 
> if username like 'X%' then
> 
> update Y set  = 0;
> 
> update  set = 0;
> 
> EXECUTE('truncate table abc');
> 
> EXECUTE('truncate table def');
> 
> 
> 
> else
> 
> RAISE NOTICE 'User not allowed to run this procedure';
> 
> end if;
> 
> end;
> 
>  
> I tried to modify like
> 
>  
> sess.createSQLQuery("select reset_all()").executeUpdate();
> 
>  
> getting error -- org.postgresql.util.PSQLException: A result was returned 
> when none was expected.
> 
>  
> What should be the syntax of calling reset_all().
> 
> 
> 
> Thanks and Regards,
> 
> Abhra
> 
> 
> 
At the very least change ‘call’ to ‘select’.  Are those braces (“{}”) truly 
necessary? (It's been a while since I've used Hibernate.)
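
For reference, from psql itself the call is just a SELECT (a sketch,
assuming reset_all() returns void):

SELECT reset_all();
-- psql prints a single empty "reset_all" column, which can be ignored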




Re: Call sql function in psql

2018-01-18 Thread David G. Johnston
On Thu, Jan 18, 2018 at 9:22 AM, Abhra Kar  wrote:

>
> I tried to modify like
>
>
>
> sess.createSQLQuery("select reset_all()")
>

​This part is correct​



> .executeUpdate();
>
>
>
> getting error -- org.postgresql.util.PSQLException: A result was returned
> when none was expected.
>
>
>
​You need execute the above the same way you would execute "SELECT * FROM
tbl"; then just ignore the result set that is returned.

David J.
​


Re: Connection type

2018-01-18 Thread Enrico Pirozzi



On 18/01/2018 17:19, Steve Atkins wrote:

A client can voluntarily set the application_name, e.g. as part of it's 
connection string,
to identify itself to the server, if you want to be able to identify which sort 
of client
is connected easily.


Thank you very much :)

Regards
Enrico



Possible hang in 10.1 with JSON query over partially indexed partitions

2018-01-18 Thread Paul Jones
Version   10.1, Community version from PGDG repo
OS        RHEL 7.3

I may have discovered a situation in 10.1 where EXECUTEing a PREPARED
statement acting on JSON data in partitioned tables hangs in an
infinite loop for a particular set of data.  Unfortunately, the data is
proprietary, so I did the best I could below to describe what happened.

1) Partitioning is done with an (I hope) IMMUTABLE FUNC
2) Each partition has a partial index on IP address
3) The query looks for logins within a certain small time frame
   from the same IP

I compiled a debug version and took 3 backtraces.  When I first discovered
this, I allowed it to sit for 10-15 minutes before interrupting it.
Similar queries for different addresses return within seconds.

 Setup --

CREATE FUNCTION sesstime(tin JSONB)
RETURNS DATE
AS $$
SELECT (substring(tin->>'timestamp' from 1 for 8))::DATE;
$$ LANGUAGE SQL IMMUTABLE;

DROP TABLE sessparts;
CREATE TABLE sessparts
(
    session    JSONB
) PARTITION BY RANGE (sesstime(session));

CREATE TABLE sessparts_20171116
PARTITION OF sessparts
FOR VALUES FROM ('2017-11-16') TO ('2017-11-17');

CREATE TABLE sessparts_20171117
PARTITION OF sessparts
FOR VALUES FROM ('2017-11-17') TO ('2017-11-18');


.OTHER PARTITIONS ELIDED FOR BREVITY.


CREATE INDEX ON sessparts_20171116 (CAST(session->>'requestIP' AS INET))
WHERE session->>'requestIP' != ''
AND session->>'requestIP' != '0';

CREATE INDEX ON sessparts_20171117 (CAST(session->>'requestIP' AS INET))
WHERE session->>'requestIP' != ''
AND session->>'requestIP' != '0';


.OTHER PARTITIONS ELIDED FOR BREVITY.


PREPARE fanalq1(INET, TIMESTAMPTZ, INTERVAL) AS
SELECT inetad, aid, count(aid), stddev(td)
FROM (
SELECT $1, aid,
lid,
tstmp,
EXTRACT(EPOCH FROM (tstmp - lag(tstmp) OVER (ORDER BY tstmp)))
FROM (
SELECT session->>'authID' AUTHID,
session->>'loginID' LOGINID,
to_timestamp(session->>'timestamp', 'YYYYMMDDHH24MISS') tt
FROM sessparts
WHERE sesstime(session) >= ($2 - $3)::DATE
AND sesstime(session) <= $2::DATE
AND to_timestamp(session->>'timestamp', 'YYYYMMDDHH24MISS')
<@ tstzrange($2 - $3, $2)
AND session->>'requestIP' != ''
AND session->>'requestIP' != '0'
AND (session->>'requestIP')::INET = $1
AND session->>'isAuthenticationSuccessful' = 'false'
) attempts(aid, lid, tstmp)
) tdiffs(inetad, aid, lid, ts, td)
GROUP BY aid, 1;

 Query --

sessions=# explain EXECUTE fanalq1('206.108.41.102', '2017-11-17 16:23:31-05', '60 seconds');

                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=1175004.52..1175004.72 rows=8 width=80)
   Group Key: tdiffs.aid, tdiffs.inetad
   ->  Sort  (cost=1175004.52..1175004.54 rows=8 width=72)
         Sort Key: tdiffs.aid, tdiffs.inetad
         ->  Subquery Scan on tdiffs  (cost=1175004.00..1175004.40 rows=8 width=72)
               ->  WindowAgg  (cost=1175004.00..1175004.32 rows=8 width=112)
                     ->  Sort  (cost=1175004.00..1175004.02 rows=8 width=1400)
                           Sort Key: (to_timestamp((sessparts_20171110.session ->> 'timestamp'::text), 'YYYYMMDDHH24MISS'::text))
                           ->  Result  (cost=74.23..1175003.88 rows=8 width=1400)
                                 ->  Append  (cost=74.23..1175003.76 rows=8 width=1392)
                                       ->  Bitmap Heap Scan on sessparts_20171110  (cost=74.23..16077.23 rows=1 width=1380)
                                             Recheck Cond: ((((session ->> 'requestIP'::text))::inet = '206.108.41.102'::inet) AND ((session ->> 'requestIP'::text) <> ''::text) AND ((sessio

Re: Possible hang in 10.1 with JSON query over partially indexed partitions

2018-01-18 Thread Tom Lane
Paul Jones  writes:
> I may have discovered a situation in 10.1 where EXECUTEing a PREPARED
> statement acting on JSON data in partitioned tables hangs in an
> infinite loop for a particular set of data.  Unfortunately, the data is
> proprietary, so I did the best I could below to describe what happened.

You haven't provided any real reason to think that this represents an
infinite loop and not just a really slow query.  The query's filter
conditions are complex enough to render the planner's estimates basically
works of fiction :-(, so I have no faith in the estimates suggesting that
only a relatively small number of rows will get processed.  It's
definitely making some progress, because your successive backtraces
show different blocks getting read.
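
An easy cross-check from a second session while the EXECUTE is running
(a sketch; the wait_event columns exist in v10) is to see whether the
backend is actually waiting on anything:

SELECT pid, state, wait_event_type, wait_event,
       now() - query_start AS runtime
FROM pg_stat_activity
WHERE state = 'active';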

regards, tom lane



warning for subquery that references a table but not its columns

2018-01-18 Thread Seamus Abshere
I almost got bit by this today:

=> select email from subscribed where email not in (select email from tracks);
 email
---
(0 rows)

=> select email from subscribed where email not in (select tracks.email from 
tracks);
ERROR:  column tracks.email does not exist
LINE 1: ... email from subscribed where email not in (select tracks.ema...

(the "tracks" table doesn't have an "email" column, so the first query is just 
meaningless)

Should there be a warning for the first query that you reference "tracks" in a 
subquery but don't use any columns from it?

--
Seamus Abshere, SCEA
https://www.faraday.io
https://github.com/seamusabshere
https://linkedin.com/in/seamusabshere



Re: warning for subquery that references a table but not its columns

2018-01-18 Thread Tom Lane
Seamus Abshere  writes:
> Should there be a warning for the first query that you reference "tracks" in 
> a subquery but don't use any columns from it?

Unfortunately, outer references in subqueries are perfectly legal per SQL
standard, and also often essential.
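
In your first query the unqualified "email" inside the subquery resolves to
subscribed.email, so it is effectively read as (a sketch):

SELECT email FROM subscribed
WHERE email NOT IN (SELECT subscribed.email FROM tracks);

i.e. each row's email is compared against a set built from that same value
(once per row of tracks), which is why it legally returns zero rows as long
as tracks has at least one row.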

If we were to invent a "SET training_wheels = on" option, it would surely
include some warnings in this area, although I'm not sure exactly what the
triggering conditions ought to be.  But there are lots of ways to mess
yourself up in SQL; I don't think subqueries are that much worse than
some other gotchas.

regards, tom lane



Re: Call sql function in psql

2018-01-18 Thread Abhra Kar
Thanks

On Thu, Jan 18, 2018 at 10:01 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Jan 18, 2018 at 9:22 AM, Abhra Kar  wrote:
>
>>
>> I tried to modify like
>>
>>
>>
>> sess.createSQLQuery("select reset_all()")
>>
>
> ​This part is correct​
>
>
>
>> .executeUpdate();
>>
>>
>>
>> getting error -- org.postgresql.util.PSQLException: A result was
>> returned when none was expected.
>>
>>
>>
> ​You need execute the above the same way you would execute "SELECT * FROM
> tbl"; then just ignore the result set that is returned.
>
> David J.
> ​
>