Upgrading from 9.2.15 to 9.4.20

2019-01-18 Thread Michal
I'm trying to upgrade PostgreSQL from 9.2.15 to 9.4.20
with this:
 /usr/pgsql-9.4/bin/pg_upgrade --old-bindir=/usr/bin
--new-bindir=/usr/pgsql-9.4/bin --old-datadir=/var/lib/pgsql/data
--new-datadir=/var/lib/pgsql/9.4/data

Can anybody help me with this? I also tried the -p port parameter; it did not help. Both
database instances are shut down.

That gives me this output:

Performing Consistency Checks
-----------------------------
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket "/run/postgresql/.s.PGSQL.50432"?


could not connect to old postmaster started with the command:
"/usr/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/data" -o
"-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c
unix_socket_directory='/run/postgresql'" start
Failure, exiting




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Full text search with more than one word

2019-01-18 Thread Ritanjali Majihee
Hi Team,

 

We have around 20 million records in a PostgreSQL database. One of the
columns is an HTML document stored with the text datatype. We want to use this column for
search functionality. We have converted the HTML content to plain text and
created tokens using to_tsvector. We have implemented full text search on
this token column.

 

1.   When we search with one word and logical operators, it returns
results as expected. Example: ("Jack" AND ("Jill" OR "Tom"))

2.   However, when we use search strings of two or more words with logical
operators, it does not give the expected output. Example: for the search string
("Jack Jill" OR "Jack Tom"), it does not search for "Jack Jill" together;
it returns a row whenever it finds those two words anywhere in the text. Here, if the column
has "Jack Ryan Mark Jill", it will return the row, but we expect it
to be returned only when the text is like "Jack Jill Ryan Mark". We found that the
proximity operator (<->) can be used between the words. It returns the
results as expected, but it takes far longer than expected to get the
results; performance is very poor with the proximity operator.

 

Can you please help us solve this issue? Let us know if you need any
details from our side.

 

Thanks,

Ritanjali
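The behaviour described in point 2 above can be reproduced with a minimal sketch (illustrative only; the thread's real table, column, and query-building code are not shown in this message):

```sql
-- With plain AND (&), word order and adjacency are ignored:
SELECT to_tsvector('english', 'Jack Ryan Mark Jill')
       @@ to_tsquery('english', 'jack & jill');    -- true: both words present

-- With the proximity operator, the words must be adjacent:
SELECT to_tsvector('english', 'Jack Ryan Mark Jill')
       @@ to_tsquery('english', 'jack <-> jill');  -- false: not adjacent
SELECT to_tsvector('english', 'Jack Jill Ryan Mark')
       @@ to_tsquery('english', 'jack <-> jill');  -- true: adjacent
```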



Re: Full text search with more than one word

2019-01-18 Thread Magnus Hagander
On Fri, Jan 18, 2019 at 11:37 AM Ritanjali Majihee <
rmaj...@247headhunting.com> wrote:


It sounds like what you're looking for is phrase search, but it's hard to
figure out if you're actually using that and it's not working for you, or
if you're not using it. Please provide examples of the exact SQL that you
are using to populate your tsvectors and to do the actual searches.

//Magnus


Re: Upgrading from 9.2.15 to 9.4.20

2019-01-18 Thread Achilleas Mantzios

On 18/1/19 12:30 μ.μ., Michal wrote:

Im trying to upgrade PostgreSQL from 9.2.15 to 9.4.20
with this :
  /usr/pgsql-9.4/bin/pg_upgrade --old-bindir=/usr/bin
--new-bindir=/usr/pgsql-9.4/bin --old-datadir=/var/lib/pgsql/data
--new-datadir=/var/lib/pgsql/9.4/data

Can anybody help me with this? I also tried the -p port parameter; it did not help. Both
database instances are shut down.

That gives me this output:

Performing Consistency Checks
-----------------------------
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
 Is the server running locally and accepting
 connections on Unix domain socket "/run/postgresql/.s.PGSQL.50432"?


could not connect to old postmaster started with the command:
"/usr/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/var/lib/pgsql/data" -o
"-p 50432 -b -c listen_addresses='' -c unix_socket_permissions=0700 -c
unix_socket_directory='/run/postgresql'" start
Failure, exiting

Tweak your .pgpass a little bit; I remember that going from 9.3 to 10 I had to define:
/var/lib/pgsql:50432:*:postgres:

or just make local UNIX connections trusted in pg_hba.conf.
Also try to start the old postmaster by hand using the command shown, and
verify that you can locate the socket under /run/postgresql/.s.PGSQL.50432.

Make sure that the directory is there as well.










--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt




Re: pgbouncer

2019-01-18 Thread Fabio Pardi
Nicola,

My previous mail was maybe misleading: when I mentioned
'connections' I actually meant active connections to the db, as in
ones doing a transaction.
In that case, yes, the connections are shared. But as soon as they
initiate a transaction, they are no longer shared, and a new
pgbouncer connection to the database is spawned. Alternatively, the client
cannot be served and has to wait.

I hope the example below will clarify the situation:


---
pgbouncer setting:

test_db   = host=... dbname=... user=... pool_size=2

[pgbouncer]
pool_mode = session


max_client_conn = 10
default_pool_size = 5
min_pool_size = 4
reserve_pool_size = 3
---

With above configuration, this is what 'show pools' on pgbouncer shows
when clients are not using pgbouncer:

database   | test_db
user   | xx
cl_active  | 0
cl_waiting | 0
sv_active  | 0
sv_idle| 0
sv_used| 2
sv_tested  | 0
sv_login   | 0
maxwait| 0
maxwait_us | 0
pool_mode  | session

2 servers used, as specified by pool_size

means: 2 TCP connections open to the db

 netstat --tcp -n | grep 5432 | grep EST | wc -l
2


***

if you remove pool_size from the database string, then config becomes:

test_db   = host=... dbname=... user=...

---
[pgbouncer]
pool_mode = session


max_client_conn = 10
default_pool_size = 5
min_pool_size = 4
reserve_pool_size = 3
---

therefore:

database   | test_db
user   | xx
cl_active  | 0
cl_waiting | 0
sv_active  | 0
sv_idle| 0
sv_used| 4
sv_tested  | 0
sv_login   | 0
maxwait| 0
maxwait_us | 0
pool_mode  | session



netstat --tcp -n |grep EST | grep 5432 | wc -l
4


what we learned: pool_size overrides min_pool_size

***


Now I am restoring pool_size to 2, as we had in the beginning, and
checking how many connections we can open to pgbouncer.


We now have 2 connections always available.

I therefore open 2 connections to the database. I only connect, and
leave them idle. This is confirmed by the following:

-[ RECORD 2 ]-
database   | test_db
user   | xx
cl_active  | 2
cl_waiting | 0
sv_active  | 0
sv_idle| 0
sv_used| 2
sv_tested  | 0
sv_login   | 0
maxwait| 0
maxwait_us | 0
pool_mode  | session


so, now, 2 server connections are open. If I open one more, then you are
sharing the connections, as David mentioned.

-[ RECORD 2 ]-
database   | test_db
user   | xx
cl_active  | 3
cl_waiting | 0
sv_active  | 0
sv_idle| 0
sv_used| 2
sv_tested  | 0
sv_login   | 0
maxwait| 0
maxwait_us | 0
pool_mode  | session

We can see 3 clients active, and 2 connections to the db.

and netstat will count 2 TCP connections only.


If instead you ask your clients to initiate a database transaction, for
instance by typing 'BEGIN;', then you will actually see what I meant in my
previous mail:

database   | test_db
user   | xx
cl_active  | 3
cl_waiting | 0
sv_active  | 3
sv_idle| 0
sv_used| 0
sv_tested  | 0
sv_login   | 0
maxwait| 0
maxwait_us | 0
pool_mode  | session

and netstat will show 3 open connections to your db


How many connections can you possibly open to the database?

pool_size + reserve_pool_size = 5 in my example.

How many connections can you make to pgbouncer? A total (globally,
including to the pgbouncer admin database) of max_client_conn.

That means 'cl_active' can go up to 10 in my example, but as soon as
the 6th client initiates a transaction, it cannot be served and has to
wait for a connection to be freed.
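The counters used above can be inspected at any time from pgbouncer's admin console (a sketch; connect with psql to the special 'pgbouncer' database, on whatever port your listen_port is set to):

```sql
-- Admin-console commands, e.g. after: psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;    -- per-pool counters: cl_active, cl_waiting, sv_active, sv_used, ...
SHOW SERVERS;  -- one row per open server connection (compare with netstat)
SHOW CLIENTS;  -- one row per connected client
```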


regards,

fabio pardi





On 1/17/19 5:15 PM, Nicola Contu wrote:
>> If I set 3, and I tried to connect from 4 shells, I am still able to connect.
> 
> That would be the point - pgbouncer is sharing the 3 connections it
> keeps with PostgreSQL between the 4 client connections made to it.
> 
> Mmh, my pool_mode is per session. The 4 sessions were active, not doing
> any query, but connected to the shell.
> So that's what my doubt 
> 
> On Thu, Jan 17, 2019 at 4:32 PM David G. Johnston
> <david.g.johns...@gmail.com> wrote:
> 
> On Thu, Jan 17, 2019 at 8:18 AM Nicola Contu  > wrote:
> >
> > Hello,
> > I am a bit confused about the settings in pgbouncer
> >
> > What's exactly the pool_size?
> 
> Roughly, the number of open connections pgbouncer will keep to
> PostgreSQL.
> 
> > If I set 3, and I tried to connect from 4 shells, I am still able
> to connect.
> 
> That would be the point - pgbouncer is sharing the 3 connections it
> keeps with PostgreSQL between the 4 client connections made to it.
> 
> > Same thing for max_db_connections. I set this to 1 and I am able
> to connect from 2 shells.
> 
> Same as pool_size but basically a fail-safe since pools are
> per-user/per-database while this limit is per-database only.
> 
> > This is kind of confusing and I'm not really cleared reading the
> documentation.
> 
> For each settin

repmgr and automatic failover

2019-01-18 Thread ROS Didier
Hi
It seems that after an automatic failover by repmgr, it is possible to start
the old primary instance (split-brain risk).
For example:
in the case where we reboot the primary server, there is an automatic
failover made by repmgr, and in this situation we end up with 2 primary
instances.
Is it possible to get this with repmgr? Can anyone confirm or deny this?

Thanks in advance

Best Regards


Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD
Nanterre Picasso - E2 565D (aile nord-est)









RE: Full text search with more than one word

2019-01-18 Thread Ritanjali Majihee
Hi Magnus,

 

The exact search keyword is ("Marketing Coordinator" OR "Marketing Specialist" OR
"Marketing Associate") AND "Creative Suite".

 

The WHERE condition we are using in the PostgreSQL function follows this syntax:

 

SELECT * from tablename where html_tokens  @@ to_tsquery( ' || 
quote_literal(ikeyword) || ')

 

From the search keyword we are getting results, but not as expected. It is not
searching for "Marketing Coordinator" together; it is giving results like
"Marketing and Coordinator".

Can you please help us solve this issue?

 

Thanks,

Ritanjali

 




Re: Upgrading from 9.2.15 to 9.4.20

2019-01-18 Thread Michal
Is there anywhere the IP address of the server must be set? I am doing the
upgrade on a test machine, which is a copy of the production VM (upgrading the Jira
PostgreSQL). The checks pass when Postgres 9.2.15 is running; when I stop
both instances and try to upgrade, I get the error I described.

Can you please be more specific about what I can do?

When Postgres 9.2.15 is running, the directory /run/postgresql contains this:
pg_upgrade_internal.log  pg_upgrade_server.log  pg_upgrade_utility.log

And how can I make local UNIX connections trusted in pg_hba.conf?
pg_hba.conf contains this:

# TYPE  DATABASEUSERADDRESS METHOD

# "local" is for Unix domain socket connections only
#local   all all peer
# IPv4 local connections:
#hostall all 127.0.0.1/32ident
# IPv6 local connections:
#hostall all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication postgrespeer
#hostreplication postgres127.0.0.1/32ident
#hostreplication postgres::1/128 ident
local   all all  trust
hostall all 127.0.0.1/32trust

Thanks for the help






Re: Full text search with more than one word

2019-01-18 Thread Magnus Hagander
(please don't top-post on the postgresql mailinglists)

On Fri, Jan 18, 2019 at 3:57 PM Ritanjali Majihee <
rmaj...@247headhunting.com> wrote:


You should look into phraseto_tsquery() for searching for phrases, and
websearch_to_tsquery() for the kind of complete search with OR and AND in it
that you are looking for. to_tsquery() doesn't do any parsing like that.

//Magnus
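A sketch of what these functions produce, using terms from the thread (phraseto_tsquery is available since PostgreSQL 9.6, websearch_to_tsquery since PostgreSQL 11):

```sql
-- to_tsquery parses operators but has no notion of phrases:
SELECT to_tsquery('english', 'jack & jill');
--  'jack' & 'jill'

-- phraseto_tsquery turns free text into an adjacency (<->) query:
SELECT phraseto_tsquery('english', 'Jack Jill');
--  'jack' <-> 'jill'

-- websearch_to_tsquery understands quoted phrases plus OR/AND:
SELECT websearch_to_tsquery('english', '"Jack Jill" OR "Jack Tom"');
--  'jack' <-> 'jill' | 'jack' <-> 'tom'
```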


Re: Upgrading from 9.2.15 to 9.4.20

2019-01-18 Thread Adrian Klaver

On 1/18/19 3:50 AM, Michal wrote:

Is there anywhere the IP address of the server must be set? I am doing the
upgrade on a test machine, which is a copy of the production VM (upgrading the Jira
PostgreSQL). The checks pass when Postgres 9.2.15 is running; when I stop
both instances and try to upgrade, I get the error I described.


What OS.version are you using?

https://www.postgresql.org/docs/10/runtime-config-connection.html



Can you be please more specific what can i do?

When Postgres 9.2.15 is running, the directory /run/postgresql contains this:
pg_upgrade_internal.log  pg_upgrade_server.log  pg_upgrade_utility.log


Is this when it is running not in upgrade mode?

If so, then I would try the command that pg_upgrade_server.log showed was
used to start the server in upgrade mode (making sure the server is not
running before you do the below):


/usr/bin/pg_ctl -w -l pg_upgrade_server.log -D /var/lib/pgsql/data -o 
"-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c 
unix_socket_directory='/run/postgresql'" start


Then look in /run/postgresql.

If the server does not start I would look in the system logs to see if 
there is a permissions problem preventing Postgres from writing to 
/run/postgresql.





And how can I make local UNIX connections trusted in pg_hba.conf?
pg_hba.conf contains this:


https://www.postgresql.org/docs/10/client-authentication.html



# TYPE  DATABASEUSERADDRESS METHOD



So:


# "local" is for Unix domain socket connections only


Turn this:


#local   all all peer


into:

local   all all trust

Then restart the server.


# IPv4 local connections:
#hostall all 127.0.0.1/32ident
# IPv6 local connections:
#hostall all ::1/128 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication postgrespeer
#hostreplication postgres127.0.0.1/32ident
#hostreplication postgres::1/128 ident
local   all all  trust
hostall all 127.0.0.1/32trust

Thanks for help








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Refining query statement [DONE]

2019-01-18 Thread Rich Shepard

On Tue, 15 Jan 2019, Rich Shepard wrote:


Working with my sales/client management system using psql I have a select
statement to identify contacts to be made. This statement works:


With much patient advice from Adrian, David, Thomas, and Ron I fixed the
schema and the query statement. To close this thread I post the query that
meets my goal and provides the information I need. Formatting this as a
report will be the job of SQLAlchemy and wxPython.

/* This query selects all contact information and notes from those people
   whose next_contact date is today or earlier; only active persons. */

/* Specify columns */
SELECT p.lname, p.fname, p.direct_phone, p.active, o.org_name,
   a.act_date, a.act_type, a.notes, a.next_contact, a.comment
/* Specify tables. */ 
FROM People AS p

 JOIN Organizations AS o ON o.org_id = p.org_id
 JOIN Activities AS a ON a.person_id = p.person_id
/* Specify rows */ 
WHERE p.active = TRUE AND

  a.next_contact <= 'today'::date
GROUP BY o.org_name, p.person_id, a.person_id, a.act_date, a.act_type,
  a.next_contact
ORDER BY p.person_id, a.next_contact DESC;

I really appreciate your help.

Best regards,

Rich



Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread George Woodring
We are running 9.6.8-10 on our five Postgres clusters.
We have ~25 databases per cluster.
We have two servers that update a table in each database.

During the update process we were running a PL/pgSQL function that calculated
the difference between the new value and the previous one.  This function would do
a SELECT to get the previous value from the table and do the calculation
before the UPDATE.  We also have an AFTER TRIGGER to save the row into a
logging table.

We recently changed the process to use a BEFORE TRIGGER to do the
calculation between OLD and NEW instead of the separate function with the
SELECT.

After doing this, CPU usage on our two client servers went crazy.  CPU on the
database servers looked unchanged.  Rolling back this change fixed our
client CPU issue.

Anyone know of a reason why this change would affect the client?  I would
think that this would only affect the server side.  I could not see any
messages printed out anywhere in logs showing any type of error.

Thanks,
George Woodring
iGLASS Networks
www.iglass.net


Re: Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread Adrian Klaver

On 1/18/19 10:23 AM, George Woodring wrote:

We are running 9.6.8-10 on our five postgres clusters.
We have ~25 database per cluster.
We have two servers that update a table in each database.

During the update process we were running a PL/pgSQL function that 
calculated the difference in the new value with the previous.  This 
function would do a SELECT to get the previous value from the table and 
do the calculation before the UPDATE.  We also have an AFTER TRIGGER to 
save the row into a logging table.


We recently changed the process to do a BEFORE TRIGGER to do the 
calculation between OLD and NEW instead of the separate function with 
the SELECT.


So what was the exact change?



After doing this, CPU on our two client servers went crazy.  CPU on the 
database servers look unchanged.  Rolling back this change fixed our 
client CPU issue.


What is the client server and what is it doing?



Anyone know of a reason why this change would affect the client?  I 
would think that this would only affect the server side.  I could not 
see any messages printed out anywhere in logs showing any type of error.


Thanks,
George Woodring
iGLASS Networks
www.iglass.net 



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread Tom Lane
Adrian Klaver  writes:
> On 1/18/19 10:23 AM, George Woodring wrote:
>> We recently changed the process to do a BEFORE TRIGGER to do the 
>> calculation between OLD and NEW instead of the separate function with 
>> the SELECT.

> So what was the exact change?

>> After doing this, CPU on our two client servers went crazy.  CPU on the 
>> database servers look unchanged.  Rolling back this change fixed our 
>> client CPU issue.

> What is the client server and what is it doing?

Indeed.  There's no direct way that messing with a trigger would have
caused extra CPU on the client side.  I speculate that the trigger
caused the data to look different in a way that your client app wasn't
expecting, causing it to do something funny --- maybe loop trying to
find a matching record, or something like that.  Or maybe removing the
step that did the calculation client-side had side effects you weren't
expecting --- what uses that value client-side, exactly?

regards, tom lane



Re: Oracle BLOB to Postgres BYTEA using ora2pg

2019-01-18 Thread Peter J. Holzer
On 2019-01-15 12:56:05 +0100, Daniel Verite wrote:
>   Mihalidesová Jana wrote:
> > nipjd=> select distinct encode(serializable_value, 'escape') from
> > alf_node_properties_zaloha where serializable_value is not null; 
> > 
> > encode
> > 
> > decode(E'aced00057e72002c6f72672e616c66726573636f2e736572766963652e636d722e76657273696f6e2e56657273696f6e54797065127872000e6a6176612e6c616e672e456e756d1278707400054d494e4f52',
> > 'hex') 
> 
> These contents are code, in the form of function calls, instead of data.

I doubt this (code in what language?). Since the column name is
"serializable_value" and there are what looks like Java class names
embedded in the blob ("org.alfresco.service.cmr.version.VersionType",
"java.lang.Enum", ...) these are most likely Java objects in
java serialization format.

The OP should be able to deserialize them with Java. I would expect the
JDBC driver to return a proper byte array (or maybe a ByteBuffer). That the
field looks different (hex in psql, raw in sqlplus) is most likely
a red herring - that's just for the benefit of humans, but humans can't
read binary data directly.
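One quick check can be done from SQL (a sketch reusing the table and column names from the thread): Java's serialization stream always begins with the magic bytes 0xACED followed by the stream version 0x0005, which is exactly how the hex dump above starts.

```sql
-- true for every value that starts with the Java serialization header
SELECT encode(substring(serializable_value from 1 for 4), 'hex') = 'aced0005'
       AS looks_like_java_serialization
FROM alf_node_properties_zaloha
WHERE serializable_value IS NOT NULL;
```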

hp

-- 
   _  | Peter J. Holzer| we build much bigger, better disasters now
|_|_) || because we have much more sophisticated
| |   | h...@hjp.at | management tools.
__/   | http://www.hjp.at/ | -- Ross Anderson 




Re: Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread George Woodring
Adrian Klaver  writes:
> What is the client server and what is it doing?

Our client creates a temp table
CREATE TEMP TABLE myraw(LIKE raw INCLUDING DEFAULTS)
We load data with a copy
COPY myraw (transferid, pollgrpid, date, data, rrdtypeid, error ) FROM
STDIN WITH DELIMITER '|'
We do some checking on the data in the temp table
SELECT rawdate, raw_maint(rawdate) AS checked FROM (SELECT
date_trunc('minute', date) AS rawdate FROM myraw GROUP BY rawdate) AS foo;
Then we save the data into the permanent location
SELECT transferid, pollgrpid, process_lastpoll(ROW(myraw.*)::raw) AS
processed FROM myraw ORDER BY transferid

Our change was in the process_lastpoll() function.  We went from
  IF cntr THEN
 oval := calc_last_cntr(ptype, pgid, ds, pdate, data);
  ELSE
 oval := data;
  END IF;

  -- Insert data into table
  -- RAISE DEBUG 'Process: dsnum=%, polltype=%, cntr?=%, pdate=%,
data=%, oval=%', dsnum, ptype, cntr, pdate, data, oval;
  UPDATE lastpoll SET polltype=ptype, timeslot=ts, lasttime=pdate,
lastval=data, outval=oval, error=err WHERE pollgrpid=pgid AND dsnum=ds;
  IF found THEN
 RETURN true;
  END IF;
  BEGIN
 INSERT INTO lastpoll (timeslot, pollgrpid, dsnum, polltype,
lasttime, lastval, outval, error) VALUES (ts, pgid, ds, ptype, pdate, data,
oval, err);
 RETURN true;
  EXCEPTION WHEN unique_violation THEN
 -- Don't do anything
  END;
to this
  IF NOT cntr THEN
 oval := data;
  END IF;

  -- Insert data into table
  -- RAISE DEBUG 'Process: dsnum=%, polltype=%, cntr?=%, pdate=%,
data=%, oval=%', dsnum, ptype, cntr, pdate, data, oval;
  UPDATE lastpoll SET polltype=ptype, timeslot=ts, lasttime=pdate,
lastval=data, outval=oval, error=err WHERE pollgrpid=pgid AND dsnum=ds;
  IF found THEN
 RETURN true;
  END IF;
  BEGIN
 INSERT INTO lastpoll (timeslot, pollgrpid, dsnum, polltype,
lasttime, lastval, outval, error) VALUES (ts, pgid, ds, ptype, pdate, data,
oval, err);
 RETURN true;
  EXCEPTION WHEN unique_violation THEN
 -- Don't do anything
  END;

The calc_last_cntr() was replaced with the trigger.
from:
CREATE OR REPLACE FUNCTION calc_last_cntr(ptype text, pgid int4, ds int4,
pdate timestamp with time zone, data double precision) RETURNS double
precision AS $$
   DECLARE
  mcurr RECORD;
  res double precision;
  dwindow int4;
   BEGIN

  SELECT lasttime, lastval INTO mcurr FROM lastpoll WHERE
pollgrpid=pgid AND dsnum=ds;
  IF mcurr IS NULL THEN
 -- RAISE DEBUG 'No previous data found for pgid=%', pgid;
 return NULL;
  END IF;

  -- Calculate the counter rate
  -- Check for div by 0
  dwindow := EXTRACT(EPOCH FROM (pdate - mcurr.lasttime));
  IF dwindow = 0 THEN
 RAISE WARNING '(%) Time difference is zero (% - %)',
current_database(), pdate, mcurr.lasttime;
 return NULL;
  END IF;

  res := (data - mcurr.lastval)/dwindow;
  IF res < 0 THEN
 -- RAISE DEBUG 'Counter loop for pgid=%', pgid;
 return data/dwindow;
  END IF;

  return res;
   END;
$$ LANGUAGE plpgsql;

to:
CREATE OR REPLACE FUNCTION cntr_lastpoll() RETURNS trigger AS $$
   DECLARE
  dwindow int4;
   BEGIN
  -- Log new value into history if has a date
  IF NEW.outval IS NULL THEN
 -- Calculate counter value for this DS.
 dwindow := EXTRACT(EPOCH FROM (NEW.lasttime - OLD.lasttime));
 IF dwindow = 0 THEN
RAISE WARNING '(%) Time difference is zero (% - %)',
current_database(), NEW.lasttime, OLD.lasttime;
return NEW;
 END IF;

 NEW.outval := (NEW.lastval - OLD.lastval)/dwindow;
 IF NEW.outval < 0 THEN
NEW.outval := NEW.lastval/dwindow;
 END IF;

  END IF;
  RETURN NEW;
   END;
$$ LANGUAGE plpgsql;

-- Create Trigger to calculate counter values if needed.
CREATE TRIGGER lastpoll_counter BEFORE UPDATE ON lastpoll
   FOR EACH ROW EXECUTE PROCEDURE cntr_lastpoll();

On Fri, Jan 18, 2019 at 2:01 PM Tom Lane  wrote:

>  what uses that value client-side, exactly?
>

The outval number is later read looking for values outside of
thresholds.  I did not think about it affecting selects later in the code.
All in all everything worked fine, but since the client box is an AWS EC2
instance, it started eating all of our CPU credits.




iGLASS Networks
www.iglass.net





pgint.l -- PostgreSQL interface for PicoLisp

2019-01-18 Thread Abel Normand
Hello everyone.

I'm happy to announce that a new PostgreSQL interface for PicoLisp has been
released: https://gitlab.com/Abel-ze-Normand/pgint.l . It is a simple yet
powerful minimal toolkit for integrating your PicoLisp application
with PostgreSQL databases. The library is currently under development,
so I'm open to your suggestions for future improvements.

Thanks.
-- 
Best regards, Nail.


Re: Why adding BEFORE TRIGGER affect client CPU?

2019-01-18 Thread Adrian Klaver

On 1/18/19 12:21 PM, George Woodring wrote:
Adrian Klaver > writes:

 > What is the client server and what is it doing?


Not claiming to understand all that is going on above.

What I think I see:

1) New  process_lastpoll() has:
UPDATE lastpoll SET polltype=ptype, timeslot=ts, lasttime=pdate,
lastval=data, outval=oval, error=err WHERE pollgrpid=pgid AND dsnum=ds;

2) calc_last_cntr() is now function for trigger:
CREATE TRIGGER lastpoll_counter BEFORE UPDATE ON lastpoll
   FOR EACH ROW EXECUTE PROCEDURE cntr_lastpoll();

Seems to me you are double clutching on the outval value. You UPDATE it 
in 1) then recalculate it again in 2).


As said earlier I do not fully understand what is going on, still seems 
to me process