Re: selecting timestamp

2018-02-27 Thread Adrian Klaver

On 02/27/2018 12:16 PM, chris wrote:


What is the best way of selecting current timestamp in UTC?

SELECT
CURRENT_TIMESTAMP   as ct1


Well the above would depend on your database having its time zone
set to UTC.


,timezone('UTC',CURRENT_TIMESTAMP) as ct2
,timezone('utc',now()) as ct3
,CURRENT_TIMESTAMP at time zone 'UTC' as ct4
,NOW() at time zone 'utc' as ct5


The rest would do the job. The question becomes where are you planning 
on calling these and what is the time you are looking for? See the 
following for more information:


https://www.postgresql.org/docs/10/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT

Basically the functions act differently in transactions, which will 
affect the time returned.
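For example (a small illustration of the documented behavior: CURRENT_TIMESTAMP/now() are frozen at the start of the transaction, while clock_timestamp() keeps moving):

```sql
BEGIN;
SELECT now() AS txn_start, clock_timestamp() AS wall_clock;
SELECT pg_sleep(1);
-- now() returns the same value as above; clock_timestamp() has advanced.
SELECT now() AS txn_start, clock_timestamp() AS wall_clock;
COMMIT;
```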



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Enforce primary key on every table during dev?

2018-02-28 Thread Adrian Klaver

On 02/28/2018 05:52 AM, John McKown wrote:
On Wed, Feb 28, 2018 at 7:34 AM, Jeremy Finzel <mailto:finz...@gmail.com>>wrote:


We want to enforce a policy, partly just to protect those who might
forget, for every table in a particular schema to have a primary
key.  This can't be done with event triggers as far as I can see,
because it is quite legitimate to do:

BEGIN;
CREATE TABLE foo (id int);
ALTER TABLE foo ADD PRIMARY KEY (id);
COMMIT;

It would be nice to have some kind of "deferrable event trigger" or
some way to enforce that no transaction commits which added a table
without a primary key.

Any ideas?

Thanks,
Jeremy



​What stops somebody from doing:

CREATE TABLE foo (filler text primary key default null, realcol1 int, 
realcol2 text);


And then just never bother to insert anything into the column 
FILLER? It fulfills your stated requirement of every table having a 


Then you would get this:

test=# CREATE TABLE foo (filler text primary key default null, realcol1 
int, realcol2 text);

CREATE TABLE
test=# insert into  foo (realcol1, realcol2) values (1, 'test');
ERROR:  null value in column "filler" violates not-null constraint
DETAIL:  Failing row contains (null, 1, test).

primary key. Of course, you could amend the policy to say a "non-NULL 
primary key".
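As a fallback to catching this at DDL time, a periodic catalog query can list the offenders after the fact. A sketch (adjust the schema name to the one your policy covers):

```sql
-- List ordinary tables in the target schema that have no primary key.
SELECT c.oid::regclass AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname = 'public'          -- assumed target schema
  AND NOT EXISTS (
        SELECT 1
        FROM pg_constraint con
        WHERE con.conrelid = c.oid
          AND con.contype = 'p'     -- 'p' = primary key constraint
  );
```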




--
I have a theory that it's impossible to prove anything, but I can't 
prove it.


Maranatha! <><
John McKown



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Version upgrade: is restoring the postgres database needed?

2018-03-01 Thread Adrian Klaver

On 03/01/2018 09:22 AM, Ron Johnson wrote:

On 03/01/2018 11:03 AM, Melvin Davidson wrote:






*Current*
postgres=# \l
                                   List of databases
    Name     |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges
-------------+----------+----------+-------------+-------------+-------------------
 CSSCAT_STI  | CSS      | UTF8     | en_US.UTF-8 | en_US.UTF-8 | CSS=CTc/CSS      +
             |          |          |             |             | =Tc/CSS          +
             |          |          |             |             | app_user=CTc/CSS
 CSSCAT_STIB | CSS      | UTF8     | en_US.UTF-8 | en_US.UTF-8 | CSS=CTc/CSS      +
             |          |          |             |             | =Tc/CSS          +
             |          |          |             |             | app_user=CTc/CSS
 CSSCAT_STIC | CSS      | UTF8     | en_US.UTF-8 | en_US.UTF-8 | CSS=CTc/CSS      +
             |          |          |             |             | =Tc/CSS          +
             |          |          |             |             | app_user=CTc/CSS

*Newly restored*
postgres=# \l
                                   List of databases
    Name     |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges
-------------+----------+----------+-------------+-------------+-------------------
 CSSCAT_STIB | CSS      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 CSSCAT_STIC | CSS      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |


-- 
Angular momentum makes the world go 'round.



>$ pg_dump -Fc PROD > PROD.pgdump
>$ pg_dump --globals-only postgres > globals.sql
>$ pg_dump -Fc postgres > postgres.pgdump

The last I looked, pg_dump does not have a "--globals-only" option.

Did you mean?

$ pg_dump -Fc PROD > PROD.pgdump

$ pg_dumpall --globals-only postgres > globals.sql
OR
$ pg_dumpall -g > globals.sql

$ pg_dump -Fc postgres > postgres.pgdump


Hmmm.  I just looked at the script, and it says:

$ pg_dumpall --schema-only > globals.sql

That's not good.


Well, it would dump the globals, but also the schema definitions for all 
the objects in the cluster. Though at this point we are only halfway 
through the process. What is your restore procedure?





--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Version upgrade: is restoring the postgres database needed?

2018-03-01 Thread Adrian Klaver

On 03/01/2018 09:59 AM, Ron Johnson wrote:

On 03/01/2018 11:46 AM, Adrian Klaver wrote:
[snip]

Hmmm.  I just looked at the script, and it says:

$ pg_dumpall --schema-only > globals.sql

That's not good.


Well, it would dump the globals, but also the schema definitions for 
all the objects in the cluster. Though at this point we are only 
halfway through the process. What is your restore procedure?


$ psql < globals.sql


Assuming globals.sql was created by your previously shown command:

pg_dumpall --schema-only > globals.sql

Then the above added the globals to the cluster and installed the schema 
objects (but not data) for the cluster.


$ pg_restore --clean --create --if-exists --exit-on-error --jobs=2 


When you do --clean and --create you DROP the database from the cluster 
before it is restored. A quick test here shows that the database 
permissions are not restored in that case. This is something that is 
contained in the globals.


A fix for this, I believe, is covered in this commit:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b3f8401205afdaf63cb20dc316d44644c933d5a1


${SRC}/${DB}.pgdump





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Enforce primary key on every table during dev?

2018-03-01 Thread Adrian Klaver

On 03/01/2018 01:03 PM, Ron Johnson wrote:

On 03/01/2018 02:32 PM, David G. Johnston wrote:


There's always the "account number", which is usually synthetic. Credit 
Card numbers are also synthetic.  


Actually, no:

https://en.wikipedia.org/wiki/Payment_card_number

There is a method to the madness, not just random issuance of numbers. 
It made it relatively easy for folks to generate numbers. Hence the 
addition of CSC codes.


ICD numbers are (relatively)

synthetic, too.

But that doesn't mean we have to use them willy-nilly everywhere.


--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Enforce primary key on every table during dev?

2018-03-01 Thread Adrian Klaver

On 03/01/2018 01:26 PM, Ron Johnson wrote:

On 03/01/2018 03:14 PM, Adrian Klaver wrote:

On 03/01/2018 01:03 PM, Ron Johnson wrote:

On 03/01/2018 02:32 PM, David G. Johnston wrote:


There's always the "account number", which is usually synthetic. 
Credit Card numbers are also synthetic. 


Actually, no:

https://en.wikipedia.org/wiki/Payment_card_number

There is a method to the madness, not just random issuance of numbers. 
It made it relatively easy for folks to *generate numbers*. Hence 
the addition of CSC codes.


Right.  And how do the issuers generate the individual account 
identifier within their IIN ranges?


Who knows, that is their business, though there is nothing to say they 
don't use some sort of internal 'natural' logic. It has been a while 
since we have gone down this rabbit hole on this list, mostly because it 
is an issue that is usually left at 'we agree to disagree'. Though the 
thing that always strikes me is the assumption that a number/surrogate 
key is less 'natural' than some other sort of tag or combination of 
tags. Because that is what PKs are, a tag to identify a record.






ICD numbers are (relatively)

synthetic, too.

But that doesn't mean we have to use them willy-nilly everywhere.


--
Angular momentum makes the world go 'round.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How can I include sql file in pgTAP unittest?

2018-03-02 Thread Adrian Klaver

On 03/02/2018 01:17 AM, Stéphane Klein wrote:

Hi,

context: I would like to write UnitTest to test pgSQL triggers which use 
postgres_fdw extension.
I use pgTAP <http://pgtap.org/> to write this UnitTest (I use this 
Docker environment poc-postgresql-pgTAP 
<https://github.com/harobed/poc-postgresql-pgTAP>).


All works perfectly with this test file:

BEGIN;
   SELECT plan(1);

   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

   SELECT ok(
     (SELECT COUNT(host_id) FROM db2.hosts) = 1,
     'foobar'
   );

   -- ;
ROLLBACK;

Now, I would like to extract db2 initialization in separate file 
"/test/init.sql" with this content:


   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

Now, my test file is:

BEGIN;
   SELECT plan(1);

   \i /test/init.sql

   SELECT ok(
     (SELECT COUNT(host_id) FROM db2.hosts) = 1,
     'foobar'
   );

In log I see that "init.sql" file is loaded with success:

Running tests: /test/*.sql -v
/test/init.sql ... No subtests run

But I have this error:

ERROR:  user mapping not found for "db2"

Question: where is my mistake? How can I include some sql file in my test?


It is early morning here and I may be missing something, but I do not 
see where you actually create server db2. I do see you creating server kea.




Best regards,
Stéphane
--
Stéphane Klein <mailto:cont...@stephane-klein.info>>

blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver
;: 7, "value": 2, "letter": "П"}, {"col": 6, "row": 7, "value": 
0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3, "letter": "Я"}]');

 jsonb_array_length
--------------------
                  5

test=# select jsonb_array_length( '[{"col": 3, "row": 8, "value": 2, 
"letter": "В"}, {"col": 3, "row": 9, "value": 1, "letter": "И"}, {"col": 
3, "row": 10, "value": 2, "letter": "Т"}, {"col": 3, "row": 11, "value": 
2, "letter": "К"}, {"col": 3, "row": 12, "value": 1, "letter": "А"}]');

 jsonb_array_length
--------------------
                  5

test=# select jsonb_array_length('[{"col": 0, "row": 10, "value": 2, 
"letter": "С"}, {"col": 1, "row": 10, "value": 2, "letter": "М"}, 
{"col": 2, "row": 10, "value": 1, "letter": "О"}, {"col": 4, "row": 10, 
"value": 2, "letter": "Р"}]');
 jsonb_array_length
--------------------
                  4

it works.

The error message would suggest there is data in tiles which is not an 
array but a scalar value.





Regards
Alex



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:14 AM, Alexander Farber wrote:

Hi Adrian, thank you for the reply -




#  select mid, jsonb_array_length(tiles) from words_moves where
gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

What am I doing wrong here please?


Are you sure all the values in tiles are correctly formatted? Because
when I use jsonb_array_length with the provided data:

test=# select jsonb_array_length( '[{"col": 3, "row": 7, "value": 2,
"letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"},
{"col": 5, "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row":
7, "value": 0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3,
"letter": "Я"}]');
 jsonb_array_length
--------------------
                  5


I fill that table with the following stored function (please pardon the 
huge listing):


The little gray cells are not awake enough to work through the below:) 
If it were me I would first confirm there was malformed data by looking 
at the data itself. If there are not that many records for gid = 609 
maybe a simple select of tiles would be sufficient. Otherwise maybe a 
simple plpgsql function that loops through the records applying 
jsonb_array_length and raising a notice on the error. In any case the 
point is to identify the presence of malformed data and if present the 
nature of the malformation. That would help reverse engineer any issues 
with the below.




CREATE OR REPLACE FUNCTION words_play_game(
     in_uid   integer,
     in_gid   integer,
     in_tiles jsonb
     ) RETURNS table (
     out_uid  integer, -- the player to be notified
     out_fcm  text,
     out_apns text,
     out_adm  text,
     out_body text
     ) AS




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:42 AM, Alexander Farber wrote:
Hi Adrian, I 100% agree that nobody except me should debug my huge 
stored function, but if you look at my PostgreSQL 10.3 log -




Which proves what has already been proven, that at least some of the 
data is correct. The issue is data that is not correct as evidenced by 
the error message:


select mid, jsonb_array_length(tiles) from words_moves where gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

This is not going to get solved until you identify the 'bad' tiles data.


2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.110 ms  parse 
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, 
$2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] LOG: duration: 0.058 ms  bind 
: SELECT out_uid  AS uid, out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body FROM words_play_game($1::int, 
$2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 = 
'609', $3 = 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'
2018-03-02 15:30:33.646 CET [16693] LOG:  execute : SELECT 
out_uid AS uid,  out_fcm  AS fcm,  out_apns AS apns, out_adm  AS adm,  
out_body AS body  FROM words_play_game($1::int, $2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 = 
'609', $3 = 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'


I just pass as the 3rd argument in_tiles to my stored function: 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'


and then take the in_tiles and store it unchanged in the words_moves table:

     INSERT INTO words_moves (
     action,
     gid,
     uid,
     played,
     tiles
     ) VALUES (
     'play',
     in_gid,
     in_uid,
     CURRENT_TIMESTAMP,
     in_tiles
     ) RETURNING mid INTO STRICT _mid;

Does anybody happen to see what I could do wrong there?

Thank you for any hints
Alex

P.S: Here is my stored function: 
https://gist.github.com/afarber/88a832a1b90a8940764ad69b2b761914
     Here is my table: 
https://gist.github.com/afarber/06cc37114ff8dd14f05077f312904361

     And here is how I call the stored function from Java:

    String SQL_PLAY_GAME    =
     "SELECT " +
     "out_uid  AS uid,  " +
     "out_fcm  AS fcm,  " +
     "out_apns AS apns, " +
     "out_adm  AS adm,  " +
     "out_body AS body  " +
     "FROM words_play_game(?::int, ?::int, ?::jsonb)";

     private void handlePlay(int gid, String tiles) throws SQLException, 
IOException {

     LOG.info("handlePlay: {} -> {} {}", mUid, gid, tiles);
     try (Connection db = DriverManager.getConnection(DATABASE_URL, 
DATABASE_USER, DATABASE_PASS);
     PreparedStatement st = 
db.prepareStatement(SQL_PLAY_GAME)) {

     st.setInt(1, mUid);
     st.setInt(2, gid);
     st.setString(3, tiles);
     runPlayerAction(st, gid);
     }
     }



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:42 AM, Alexander Farber wrote:
Hi Adrian, I 100% agree that nobody except me should debug my huge 
stored function, but if you look at my PostgreSQL 10.3 log -




Another thought, in:

2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind 
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, 
$2::int, $3::jsonb)


where is $3::jsonb coming from?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How can I include sql file in pgTAP unittest?

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:46 AM, Stéphane Klein wrote:




Sorry, it is not kea but db2 in my example. I did a mistake when I 
replace the db name in my example.


You need to read:

   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER db2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;


I don't know much about pgTap, but my first instinct would be to change:

BEGIN;
  SELECT plan(1);

  \i /test/init.sql

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );

to:


BEGIN;

  \i /test/init.sql

  SELECT plan(1);

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:04 AM, David G. Johnston wrote:
On Friday, March 2, 2018, Adrian Klaver <mailto:adrian.kla...@aklaver.com>> wrote:


On 03/02/2018 06:42 AM, Alexander Farber wrote:



2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS
apns, out_adm  AS adm,  out_body AS body  FROM
words_play_game($1::int, $2::int, $3::jsonb)

where is $3::jsonb coming from?


Java prepared statement I think, using setString.


I was thinking more about the ultimate source of the data. The 
words_play_game() function, AFAICT, just passes the jsonb from input 
into the words_moves table. If that is the case the problem may occur 
further upstream where the jsonb array is actually built.




Not at computer right now but what does:

'[1,2,3]'::text::jsonb result in compared to '[1,2,3]'::jsonb ?

Because the system thinks your array-looking string is actually just a 
scalar that happens to have a leading [ and a trailing ]


David J.
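For what it is worth, a direct text-to-jsonb cast still parses the string as JSON, while to_jsonb() of a text value produces a JSON string scalar; the latter is one way an array-looking value can end up as a scalar:

```sql
SELECT jsonb_typeof('[1,2,3]'::jsonb)          AS direct,   -- array
       jsonb_typeof('[1,2,3]'::text::jsonb)    AS via_text, -- array (the cast parses it)
       jsonb_typeof(to_jsonb('[1,2,3]'::text)) AS wrapped;  -- string scalar
```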




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:43 AM, Alexander Farber wrote:

# select * from words_moves where gid=609 limit 3;
-[ RECORD 1 ]------------------------------------------------------------
mid    | 1040
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 10:13:57.943876+01
tiles  | [{"col": 3, "row": 7, "value": 2, "letter": "С"}, {"col": 4, 
"row": 7, "value": 1, "letter": "О"}, {"col": 5, "row": 7, "value": 2, 
"letter": "П"}, {"col": 6, "row": 7, "value": 0, "letter": "Л"}, {"col": 
7, "row": 7, "value": 3, "letter": "Я"}]

score  | 10
-[ RECORD 2 ]------------------------------------------------------------
mid    | 1041
action | play
gid    | 609
uid    | 7
played | 2018-03-02 10:56:58.72503+01
tiles  | [{"col": 3, "row": 8, "value": 2, "letter": "В"}, {"col": 3, 
"row": 9, "value": 1, "letter": "И"}, {"col": 3, "row": 10, "value": 2, 
"letter": "Т"}, {"col": 3, "row": 11, "value": 2, "letter": "К"}, 
{"col": 3, "row": 12, "value": 1, "letter": "А"}]

score  | 14
-[ RECORD 3 ]------------------------------------------------------------
mid    | 1043
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 11:03:58.614094+01
tiles  | [{"col": 0, "row": 10, "value": 2, "letter": "С"}, {"col": 1, 
"row": 10, "value": 2, "letter": "М"}, {"col": 2, "row": 10, "value": 1, 
"letter": "О"}, {"col": 4, "row": 10, "value": 2, "letter": "Р"}]

score  | 13

#  select column_name, data_type from information_schema.columns where 
table_name='words_moves';

 column_name  |        data_type
--------------+--------------------------
 mid          | bigint
 action       | text
 gid          | integer
 uid          | integer
 played       | timestamp with time zone
 tiles        | jsonb
 score        | integer
(7 rows)

#  select jsonb_array_length(tiles) from words_moves where gid=609 limit 3;
 jsonb_array_length
--------------------
                  5
                  5
                  4
(3 rows)

BUT:

#  select jsonb_array_length(tiles) from words_moves where gid=609 ;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579


Which means only some data is bad, but how to find it please?


What is the result of:

select count(*) from words_moves where gid=609;

A simplistic approach would be:

select mid, jsonb_array_length(tiles) from words_moves where gid=609 
order by mid limit x;


where you increment x until you trigger the error.

A more sophisticated approach would be to use plpgsql EXCEPTION handling:

https://www.postgresql.org/docs/10/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

and create a function that loops through:

select jsonb_array_length(tiles) from words_moves where gid=609 ;

and RAISES a NOTICE for each incorrect value along with its mid value.
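A minimal sketch of such a checker (the function name and NOTICE format are my own invention; this version uses jsonb_typeof() to spot non-array rows rather than trapping the exception, which avoids EXCEPTION handling entirely):

```sql
-- Hypothetical helper: report rows whose tiles value is not a JSON array.
-- Assumes the words_moves(mid, gid, tiles) layout shown above.
CREATE OR REPLACE FUNCTION find_bad_tiles(in_gid integer)
RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT mid, tiles FROM words_moves WHERE gid = in_gid LOOP
        IF jsonb_typeof(r.tiles) <> 'array' THEN
            RAISE NOTICE 'mid % has non-array tiles: %', r.mid, r.tiles;
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT find_bad_tiles(609);
```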










--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:58 AM, Alexander Farber wrote:

I see, thank you for your comments, David and Adrian.

In the "tiles" column I actually save either the JSON array of tiles - 
when the user plays them - or a string (which is jsonb too) concatenated 
from the letters - when the user swaps the letters.


Maybe I should rethink my table structure (I just want to "log" all 
plays, swaps, skips, resigns in the words_moves table)...


Or make the string the value of an array:

[{"swap": "ТСНЦУЭ"}]

so you are not changing the inner JSON in the field.
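If the column does stay mixed, one way to query it safely is to branch on jsonb_typeof() so the scalar rows never reach jsonb_array_length() (a sketch against the words_moves table shown earlier):

```sql
SELECT mid,
       CASE WHEN jsonb_typeof(tiles) = 'array'
            THEN jsonb_array_length(tiles)
       END AS tiles_played   -- NULL for swap rows holding a string scalar
FROM words_moves
WHERE gid = 609;
```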



Or maybe I should always check the "action" column first (acts as an 
enum) - before accessing the "tiles" column


Regards
Alex




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: SQL syntax

2018-03-02 Thread Adrian Klaver

On 03/02/2018 04:36 PM, Dale Seaburg wrote:
My mind is drawing a blank.  Not sure where to go to find the answer. 
Here is the statement in C#:


sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE ";

To finish off the WHERE clause, I need to look at the first 2 letters, 
like "D:".  My question is how do I specify in the WHERE clause, to look 
at the first 2 characters in the Image_Filename column?  What is the 
correct SQL syntax for looking at just a portion of a column?


SELECT "Image_Filename" FROM "Instruments" WHERE "Image_Filename" LIKE 'D:%';

If you want case insensitive then ILIKE. For more info:
https://www.postgresql.org/docs/10/static/functions-matching.html#FUNCTIONS-LIKE
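Equivalently, you could compare the first two characters directly with left() or substring():

```sql
SELECT "Image_Filename"
FROM "Instruments"
WHERE left("Image_Filename", 2) = 'D:';
-- or: WHERE substring("Image_Filename" FROM 1 FOR 2) = 'D:'
```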



I feel so dumb...  This can't be that hard, but I am exhausted and 
running out of ideas.  I need someone to give me a boost!  ;-)


Thanks,
Dale.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: momjian.us is down?

2018-03-04 Thread Adrian Klaver

On 03/04/2018 05:53 PM, Igal wrote:
I am trying to connect to http://momjian.us/ but I get connection 
timeout (from Firefox):


Works for me from FF. Maybe a caching issue? Have you tried:

Ctrl + Shift + R



The connection has timed out

The server at momjian.us is taking too long to respond.

     The site could be temporarily unavailable or too busy. Try again in 
a few moments.
     If you are unable to load any pages, check your computer’s network 
connection.
     If your computer or network is protected by a firewall or proxy, 
make sure that Firefox is permitted to access the Web.


Same from curl:

$ curl http://momjian.us/
curl: (7) Failed to connect to momjian.us port 80: Connection timed out


Igal







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Pgadmin4 apt packages.

2018-03-04 Thread Adrian Klaver

On 03/04/2018 07:47 PM, Tim Uckun wrote:
Where are the pgadmin4 apt packages? I have set up the PPA as specified 
on the download page but I see no pgadmin4 packages in the list.


I am running Ubuntu 17.10 but I had to set up the zesty PPA because 
there are no apt repos for artful.





https://wiki.postgresql.org/wiki/Apt

"2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it 
from their mirrors "


At the link below I do see packages for 18.04:

http://apt.postgresql.org/pub/repos/apt/pool/main/p/pgadmin4/


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: What is wrong with my pgadmin?

2018-03-05 Thread Adrian Klaver

On 03/05/2018 06:05 AM, Łukasz Jarych wrote:

Hi Guys,

what is wrong with my pgAdmin?

I am creating wrong query:

select 1/0 but still this is executed!


Well it will be executed, but it should throw an error. Can you confirm 
that running select 1/0; in psql does indeed throw an error:


test=# select 1/0;
ERROR:  division by zero

Also:

1) What version of pgAdmin4 are you running?

2) What version of Postgres is it running against?





why?

Best wishes,
Jacek





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: What is wrong with my pgadmin?

2018-03-05 Thread Adrian Klaver

On 03/05/2018 08:50 PM, Łukasz Jarych wrote:

Anyone pleasE?


This has already been answered:

https://www.postgresql.org/message-id/CAKKotZQj_vupOYZKVXVfU_sjEADeJYMSftCNcV3dnY4eHqO64g%40mail.gmail.com



Jacek

2018-03-05 15:05 GMT+01:00 Łukasz Jarych <mailto:jarys...@gmail.com>>:


Hi Guys,

what is wrong with my pgAdmin?

I am creating wrong query:

select 1/0 but still this is executed!



why?

Best wishes,
Jacek






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: org.postgresql.util.PSQLException: Error could not open file "base/": No such file or directory

2018-03-05 Thread Adrian Klaver

On 03/05/2018 05:54 PM, Raghavendra Rao J S V wrote:

Hi,

A few of the tables and indexes are bloated even though auto vacuum is 
enabled.

To reclaim the space, I have run VACUUM FULL on the larger tables and 
also performed REINDEX on the indexes which are larger in size. Now the 
size of the database is under control.


After performing the VACUUM FULL and reindexing on larger tables, I am 
facing the below error.


*org.postgresql.util.PSQLException: Error could not open file "base/": 
No such file or directory*


Assuming the database is up and running then base/ is actually there:

https://www.postgresql.org/docs/10/static/storage-file-layout.html


When I know a file or directory exists and I see an error like:

'Error could not open file "base/": No such file or directory '

then I generally check permissions on said file/directory. In particular 
does the app throwing the error have the permissions to 'see' the 
file/directory?




Please guide me how to resolve the above error and let me know whether 
this has any relation to the VACUUM FULL or reindexing operations 
performed by me.


--
Regards,
Raghavendra Rao J S V




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: save query as sql file

2018-03-07 Thread Adrian Klaver

On 03/06/2018 11:07 PM, Łukasz Jarych wrote:

Hello,

I am trying to save a query as a sql file in pgAdmin4 but File --> Save As 
does not exist like here:


https://www.youtube.com/watch?v=L4KJ_Kpymh4

Where can I do it?



Well, according to the manual:

https://www.pgadmin.org/docs/pgadmin4/2.x/query_tool.html

Query tool toolbar


Click the Save icon to perform a quick-save of a previously saved query, 
or to access the Save menu:


Select Save to save the selected content of the SQL Editor 
panel in a file.
Select Save As to open a new browser dialog and specify a new 
location to which to save the selected content of the SQL Editor panel.





Best,
Jacek



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: What is the meaning of pg_restore output?

2018-03-08 Thread Adrian Klaver

On 03/08/2018 01:49 AM, Alexandru Lazarev wrote:

Hi PG community,
I have questions about pg_restore output.

At some point in time the pg_restore output contained the phrase 
"pg_restore: setting owner and privileges for ", then 
it disappeared, it seems, when upgrading from PG 9.6.2 to 9.6.6

Other phrases, "pg_restore: processing item 3615 DEFAULT id" and
"pg_restore: creating DEFAULT "public.id"",
are observed if pg_restore is executed with flag -j N (where N > 1)
and aren't present with -j 1 (or without -j)

So main questions are:
1. What is the meaning of this phrases?
2. Are they documented somewhere?
3. Is it possible that output change from version to version?


I do not see any change in the release notes:
https://www.postgresql.org/docs/9.6/static/release.html

I would look at client_min_messages (enum) in:
https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html

It would also be helpful to see the complete pg_dump and pg_restore 
commands you are working with.




Why I am asking?
I saw a script which does pg_restore and greps for some of these phrases 
as a SUCCESS indicator that pg_restore passed OK (+ something like grep -iv 
"error").
The script doesn't use the pg_restore exit code because, as I understood 
from the authors, in the past there were situations where pg_restore 
returned code 0 but didn't restore the db.


Any answers, hints are welcome :).
Thanks




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: What is the meaning of pg_restore output?

2018-03-08 Thread Adrian Klaver

On 03/08/2018 09:06 AM, Alexandru Lazarev wrote:

client_min_messages GUC didn't change

Commands are:
pg_dump --host 127.0.0.1 --port 5432 --username postgres --format custom 
--blobs --file /tmp/postgresql.backup -Z9 my_db


pg_restore -h 127.0.0.1 -p 5432 -U postgres -Fc -v -j$(nproc) -d my_db 
/tmp/postgresql.backup 2>&1 | tee /tmp/restore.txt





When I restore using 10.2 I see:

pg_restore: creating ACL "public.TABLE wl_week"

Do you see something similar?

My suspicion is that this might have something to do with the commits below:

https://git.postgresql.org/gitweb/?p=postgresql.git&a=search&h=HEAD&st=commit&s=+ACL

Someone with more knowledge of this will need to confirm or deny.


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-08 Thread Adrian Klaver

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC app attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is in an event of a failover, to first 
try to connect to the master. If that fails to then reach out to the 
standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db


That seems to be correct:

https://jdbc.postgresql.org/documentation/94/connect.html
"Connection Fail-over

To support simple connection fail-over it is possible to define multiple 
endpoints (host and port pairs) in the connection url separated by 
commas. The driver will try to once connect to each of them in order 
until the connection succeeds. If none succeed, a normal connection 
exception is thrown.


The syntax for the connection url is:

jdbc:postgresql://host1:port1,host2:port2/database"



Here is the error we are getting:

2018-03-07 13:54:36,994 ERROR: com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource java.lang.NumberFormatException: for 
input string: "5432,10.16.10.13:5432"


It looks like the first IP address has disappeared or is not set right.


To me it looks whatever code you are using is trying to us 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.
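One possible workaround, assuming the pool is HikariCP (the 
com.zaxxer.hikari package in the error suggests so): configure the pool 
with the full JDBC URL instead of individual dataSource properties, so 
the PostgreSQL driver's own URL parser handles the comma-separated host 
list. PGSimpleDataSource's portNumber property only accepts a single 
integer, which would explain the NumberFormatException. The fragment 
below is a sketch with made-up credentials, not a verified fix:

```properties
# Hypothetical HikariCP settings: pass the multi-host URL through jdbcUrl
# so the pgjdbc URL parser, not PGSimpleDataSource, splits the host list.
jdbcUrl=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db
driverClassName=org.postgresql.Driver
username=app_user
password=secret
```

Newer pgjdbc releases also accept a targetServerType=master URL parameter 
to prefer the primary; check the documentation for your driver version.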


Can you provide more information about the JDBC version you are using 
and the app code you are using?



/

Thanks in advance for the help.


//





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-08 Thread Adrian Klaver

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot standby 
configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is, in the event of a failover, to first 
try to connect to the master. If that fails, to then reach out to the 
standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource java.langNumberFormatException:for 
input string: "5432,10.16.10.13:5432"/


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set right./
/

Thanks in advance for the help.


//





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-08 Thread Adrian Klaver

On 03/08/2018 01:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go about 
debugging that it's not seeing the comma?


From previous post:

Can you provide more information about the JDBC version you are using 
and the app code you are using?





On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot standby 
configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is, in the event of a failover, to 
first try to connect to the master. If that fails, to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set right./
/

Thanks in advance for the help.


//











--
Adrian Klaver
adrian.kla...@aklaver.com



Re: how to get list of sequences owned by a user/role

2018-03-09 Thread Adrian Klaver

On 03/09/2018 12:08 PM, Charlin Barak wrote:

Hi,
I can find sequences owned by a schema from information_schema.sequences 
and pg_class but how do I find sequences owned by a user/role? What pg 
table should I be looking at?


https://www.postgresql.org/docs/10/static/view-pg-sequences.html



Thanks.

Charlin



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: how to get list of sequences owned by a user/role

2018-03-09 Thread Adrian Klaver

On 03/09/2018 02:22 PM, Adrian Klaver wrote:

On 03/09/2018 12:08 PM, Charlin Barak wrote:

Hi,
I can find sequences owned by a schema from 
information_schema.sequences and pg_class but how do I find sequences 
owned by a user/role? What pg table should I be looking at?


https://www.postgresql.org/docs/10/static/view-pg-sequences.html


This is new to PG 10.

For older versions:

https://www.postgresql.org/docs/9.6/static/catalog-pg-class.html
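On 9.6 and earlier, a query along these lines should do it (the role 
name is a placeholder):

```sql
-- Sequences (relkind = 'S') owned by a given role, via pg_class.
SELECT n.nspname AS schema, c.relname AS sequence_name
FROM pg_class c
JOIN pg_roles r ON r.oid = c.relowner
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'S'
  AND r.rolname = 'some_role';
```

On PG 10 and later the pg_sequences view makes this simpler, e.g. 
SELECT * FROM pg_sequences WHERE sequenceowner = 'some_role';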





Thanks.

Charlin






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pgpass hostname and IP address

2018-03-09 Thread Adrian Klaver

On 03/09/2018 08:19 PM, Ron Johnson wrote:

(8.4 and 9.2, but soon to 9.6)

If we sometimes access a db server by IP address, and sometimes by 
hostname, must we have two entries for each server+user (one with the 
hostname and the other with IP address), or is there a way to put them 
both on the same line?




Not with .pgpass, though there is the service file method:

https://www.postgresql.org/docs/9.6/static/libpq-pgservice.html

https://www.postgresql.org/docs/10/static/libpq-connect.html#LIBPQ-CONNSTRING

service

Service name to use for additional parameters. It specifies a 
service name in pg_service.conf that holds additional connection 
parameters. This allows applications to specify only a service name so 
connection parameters can be centrally maintained. See Section 33.16.
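As an illustration of the service file approach (all names and values 
below are made up), the connection parameters live in one place:

```ini
# ~/.pg_service.conf -- illustrative values only
[mydb]
host=db.example.com
port=5432
dbname=app_db
user=app_user
```

Clients then connect with e.g. psql "service=mydb", and a single .pgpass 
line using the * wildcard for the host field 
(*:5432:app_db:app_user:secret) matches regardless of whether the host 
is given by name or by IP address.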




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Application Dependency/Limitations of Postgres Version Upgrade

2018-03-10 Thread Adrian Klaver

On 03/09/2018 10:49 PM, amitabh kumar wrote:

Hi,

I would like to know about application dependency of PostgreSQL version 
upgrade.


We have multiple servers with PG versions in 8.4, 9.2, 9.3, 9.4 and 9.5, 
that we want to upgrade to 9.6. We want to be sure that all applications 
will run smoothly after upgrade.


The only way I know to be sure is to set up a test instance of 9.6 
and test your applications. As an example of the things that could trip 
you up:


For your 8.4 --> 9.x upgrades:
https://www.postgresql.org/docs/10/static/release-9-1.html#id-1.11.6.121.4

"

Change the default value of standard_conforming_strings to on 
(Robert Haas)


By default, backslashes are now ordinary characters in string 
literals, not escape characters. This change removes a long-standing 
incompatibility with the SQL standard. escape_string_warning has 
produced warnings about this usage for years. E'' strings are the proper 
way to embed backslash escapes in strings and are unaffected by this change.

Warning

This change can break applications that are not expecting it and do 
their own string escaping according to the old rules. The consequences 
could be as severe as introducing SQL-injection security holes. Be sure 
to test applications that are exposed to untrusted input, to ensure that 
they correctly handle single quotes and backslashes in text strings.


"

I would as matter of course also review the release notes for each major 
release since 8.4 to look for further gotchas.





Is there any dependency or limitation of applications in 9.6 after 
upgrading from these versions? OS platforms we are using are Linux 
5, 6.7, 6.8 and Windows 10, 12.


I am happy to provide more information if needed.

Regards,

Amitabh
PostgreSQL DBA




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: wrong message when trying to create an already existing index

2018-03-10 Thread Adrian Klaver

On 03/10/2018 07:00 AM, legrand legrand wrote:

Hello,
When trying to create an already existing index (in pg 9.5)

SQL> create  index if not exists NEWINDEX on SCHEMA.TABLE(COL);
   > relation "NEWINDEX" already exists, skipping

message speaks about relation (and not index)


https://www.postgresql.org/docs/10/static/catalog-pg-class.html

"The catalog pg_class catalogs tables and most everything else that has 
columns or is otherwise similar to a table. This includes indexes (but 
see also pg_index), sequences (but see also pg_sequence), views, 
materialized views, composite types, and TOAST tables; see relkind. 
Below, when we mean all of these kinds of objects we speak of 
“relations”. Not all columns are meaningful for all relation types."





Would it be possible for this message to report the correct object type?
Regards
PAscal



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: can someone send a design document of "parallel work" to me?

2018-03-12 Thread Adrian Klaver

On 03/12/2018 02:59 AM, lin wrote:

Hi, all,
      I have read the code for "parallel work" in PostgreSQL 9.6.  I 
hope someone can give me a design document for "parallel work".


In  search of clarity:

1) When you say parallel work are you talking about the parallel query 
features or something else?


2) When you refer to a design document, are you talking about one that deals 
with adding features to the code or one that provides ways of working 
with the existing features?




Thanks,
Lin





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Reindex doesn't eliminate bloat

2018-03-12 Thread Adrian Klaver

On 03/12/2018 03:05 PM, Ron Johnson wrote:

v8.4.12

According to this (https://pastebin.com/TJB32n5M) query, which I thought 
I got from https://wiki.postgresql.org/wiki/Index_Maintenance, a list of 
indexes and their bloat is generated.


After reindexing a table with a large amount of reported bloat (column 
bloat_pct says 29%), re-running the query shows no change in the amount 


First I am not seeing a column bloat_pct in the query you linked to, so 
are you sure that is the actual query you used?


of bloat.  This is a historical table, and VACUUM VERBOSE shows that 
there's nothing to free up.


Is this something that I must live with, or am I misinterpreting the query?


Honestly I have not worked my way in depth through the query you show, 
though I did notice it uses pg_stats. What happens if you run ANALYZE 
(https://www.postgresql.org/docs/8.4/static/sql-analyze.html) to update 
the stats?




Thanks,




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Programmatically duplicating a schema

2018-03-12 Thread Adrian Klaver

On 03/12/2018 08:23 PM, matt.f...@internode.on.net wrote:

Hi all,

What is a reliable way to programmatically & generically populate an 
empty schema with all the objects in the public schema as a template?


We are using the multi tenancy ruby gem Apartment ( 
https://github.com/influitive/apartment ), which was recently broken by 
the changes made to pg_dump to address CVE-2018-1058 
https://nvd.nist.gov/vuln/detail/CVE-2018-1058


Apartment attempts to duplicate the public schema whenever creating a 
new schema by running:


pg_dump -s -x -O -n public

to get the SQL statements needed to recreate the public schema & then 
executes the pg_dump's sql output after creating & switching to the new 
schema ( via set search_path to ; )


After the fix to CVE-2018-1058, all table references in pg_dump's output 
(including within SQL of stored procedures) are prefixed by the public. 
schema, which means you cannot just reuse this output in a different 
schema context without first manually changing the sql.
As a temporary fix so we can handle new customers in production, we are 
using a regex search/replace for public. in the pg_dump output, but 
clearly this is not a reliable solution for a generic gem such as Apartment.


Is there a different way this gem could now be achieving this?
Without a flag on pg_dump to force the old format, or a CREATE SCHEMA 
new_schema WITH TEMPLATE old_schema kind of statement, we can't 
see any alternative more reliable than the regex search/replace.


Wild idea:

1) Roll back to:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=tree;h=c987089c48afba99df0c3c2dc45fd69238b52705;hb=2840d201c6a62beb86d9671a66eeec56183d261b

2) Use that to build a pre-10.3 version of pg_dump and use that to dump 
your schema. Basically a long way to force the old format. It is fragile 
though, as that version will not pick up future changes.



A less wild idea, previously suggested upstream:

1) Keep your schema objects in scripts that are checked into version 
control w/o schema qualifications.


2) Use scripts to populate new schema.
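In psql terms the suggested approach might look like this (schema and 
file names are placeholders):

```sql
-- Create the tenant schema, point search_path at it, then run the
-- version-controlled, unqualified DDL scripts inside it.
CREATE SCHEMA tenant_a;
SET search_path TO tenant_a;
\i schema_objects.sql  -- e.g. CREATE TABLE foo (...); with no schema prefixes
```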



The open issue on the Apartment gem for 
context: https://github.com/influitive/apartment/issues/532


Thanks in advance.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Programmatically duplicating a schema

2018-03-13 Thread Adrian Klaver

On 03/12/2018 11:05 PM, matt.f...@internode.on.net wrote:

Thanks Adrian,

Really appreciate the suggestions.

The objective when trying to solve this for the Apartment library itself 
is to keep it generic (works for any/all database objects - tables, 
views, stored procedures, 3rd party extension objects, etc.) & to 
require minimal configuration (preferably not having to tell the library 
your schema).


I believe pg_dump was the only way to satisfy the second point.  The 
reason we'd want to infer the create statements via pg_dump is, so we 
don't need to keep database migration files in sync with a 'create new 
schema' SQL script.  It adds risk that they get out of sync, causing 
inconsistencies in new schemas created by the library.


Seems to me your best hope is to make a plea on --hackers for a flag 
that turns off schema qualification of object names.




Assuming there's no other way to infer the create statements from the 
public schema, Ruby on Rails' structure.sql could probably be used as a 
starting point for the 'create new schema' SQL file.  It's similar 
already, however it's also generated via pg_dump (having the same issues 
as Apartment library). 
http://guides.rubyonrails.org/v3.2.9/migrations.html#types-of-schema-dumps  
This is outside the realms of this mail group though.


Cheers,

Matt.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-13 Thread Adrian Klaver

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the exact JDBC driver version you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go about 
debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is, in the event of a failover, to 
first try to connect to the master. If that fails, to then reach out 
to the standby.


I looked online and found the suggested way to do this, but it's not 
working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf - 
failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In other 
words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right./

/

Thanks in advance for the help.


//













--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Primary key gist index?

2018-03-14 Thread Adrian Klaver

On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
Hello!  From all that I can tell, it is not possible to use a btree_gist 
index as a primary key.  If so, why not?  I have a table with this gist 


https://www.postgresql.org/docs/10/static/btree-gist.html

"In general, these operator classes will not outperform the equivalent 
standard B-tree index methods, and they lack one major feature of the 
standard B-tree code: the ability to enforce uniqueness."


index which truly ought to be its primary key.  as_of_date is of date 
range type:


EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

Any direction here would be much appreciated.

Right now, I am forced to create a redundant btree index UNIQUE, btree 
(id, lower(as_of_date)) in order to have a primary key on the table.


Thanks!
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-14 Thread Adrian Klaver

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what driver 
is in use. The failover syntax looks to be fairly recent, so being off 
by a little on the driver version can make a big difference.


Or you could do as this post suggests:

https://www.postgresql.org/message-id/CADK3HHJgdio_TZ-fpk4rguWaA-wWZFNZrjBft_T4jLBK_E_c8w%40mail.gmail.com

That is install driver version 42.2.1.


On 03/13/2018 05:48 PM, Adrian Klaver wrote:

On 03/13/2018 04:46 PM, chris wrote:

I'm sorry that took a few days but I am running;


Postgresql-9.4

and

jre7.jar


What we are looking for is the exact JDBC driver version you are using.



Thanks in advance.


On 03/08/2018 02:30 PM, chris wrote:
Given that the syntax looks correct for the url, how would we go 
about debugging that it's not seeing the comma?



On 03/08/2018 02:27 PM, Adrian Klaver wrote:

On 03/08/2018 01:12 PM, chris wrote:

Hi,

I have a JDBC client attempting to connect to 2 postgres nodes in a hot 
standby configuration. Running postgres 9.4 on centos6.


What we are trying to accomplish is, in the event of a failover, to 
first try to connect to the master. If that fails, to then reach 
out to the standby.


I looked online and found the suggested way to do this, but it's 
not working.


This is the JDBC line we have:

/jdbc.url=jdbc:postgresql://10.16.10.12:5432,10.16.10.13:5432/app_db/

Here is the error we are getting:/
/

/2018-03-07 13:54:36, 994 ERROR:com.zaxxer.hikari.util.PropertyElf 
- failed to set properly port number on target class 
org.postgresql.ds.PGSimpleDataSource 
java.langNumberFormatException:for input string: 
"5432,10.16.10.13:5432"/


Let's try that again:

To me it looks like whatever code you are using is trying to use 
"5432,10.16.10.13:5432" as the port number for 10.16.10.12. In 
other words it is not seeing the ',' as a separator for the two IP's'.


Time to clean the glasses:)



It looks like the first IP address has disappeared or is not set 
right./

/

Thanks in advance for the help.


//



















--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Adrian Klaver

On 03/14/2018 10:02 AM, Alexander Farber wrote:

Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko 
mailto:i.panche...@postgrespro.ru>> wrote:


Hi Alex,

SELECT  string_agg(x->>'letter','') FROM json_array_elements(

'[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8,
"row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12,
"value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2,
"letter": "D"}]'::json

) x;


# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                                                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}]   |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select 
jsonb_array_elements(tiles) from words_moves where gid=656 and 
action='play' order by played desc limit 5) x;

ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
    ^
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.

LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?


Do you still have non-arrays in the tile field?:

https://www.postgresql.org/message-id/CAADeyWgYKKaArJb6JK_xEtSO%3D7aeNaYqBu_ef-D5W7s8EFPfpQ%40mail.gmail.com



Regards
Alex





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Extract elements from JSON array and return them as concatenated string

2018-03-14 Thread Adrian Klaver

On 03/14/2018 10:12 AM, Adrian Klaver wrote:

On 03/14/2018 10:02 AM, Alexander Farber wrote:

Thank you, Ivan! I am trying to apply your suggestion to my table -

On Wed, Mar 14, 2018 at 5:34 PM, Ivan E. Panchenko 
mailto:i.panche...@postgrespro.ru>> wrote:


    Hi Alex,

    SELECT  string_agg(x->>'letter','') FROM json_array_elements(

    '[{"col": 7, "row": 12, "value": 3, "letter": "A"}, {"col": 8,
    "row": 12, "value": 10, "letter": "B"}, {"col": 9, "row": 12,
    "value": 1, "letter": "C"}, {"col": 10, "row": 12, "value": 2,
    "letter": "D"}]'::json

    ) x;


# select * from words_moves where gid=656 order by played desc limit 3;
 mid  | action | gid | uid  |            played             | tiles                                                                                                | score
------+--------+-----+------+-------------------------------+------------------------------------------------------------------------------------------------------+-------
 1353 | swap   | 656 |    7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"                                                                                               |     ¤
 1352 | play   | 656 | 1199 | 2018-03-14 16:55:45.898379+01 | [{"col": 4, "row": 7, "value": 2, "letter": "С"}, {"col": 5, "row": 7, "value": 3, "letter": "У"}]   |    19
 1351 | play   | 656 |    7 | 2018-03-14 16:38:48.132546+01 | [{"col": 9, "row": 11, "value": 5, "letter": "Ж"}, {"col": 9, "row": 13, "value": 2, "letter": "М"}] |    16
(3 rows)

by trying the following:

#  select string_agg(x->>'letter', ' ') from (select 
jsonb_array_elements(tiles) from words_moves where gid=656 and 
action='play' order by played desc limit 5) x;

ERROR:  42883: operator does not exist: record ->> unknown
LINE 1: select string_agg(x->>'letter', ' ') from (select jsonb_arra...
    ^
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.

LOCATION:  op_error, parse_oper.c:728

I am probably missing something obvious?


Do you still have non-arrays in the tile field?:

https://www.postgresql.org/message-id/CAADeyWgYKKaArJb6JK_xEtSO%3D7aeNaYqBu_ef-D5W7s8EFPfpQ%40mail.gmail.com 


I should have looked closer before answering, yes there are:

1353 | swap   | 656 |7 | 2018-03-14 17:22:18.430082+01 | "ЙНРР"
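For what it is worth, once the set-returning column gets an alias (which 
fixes the record ->> unknown error) and the non-array 'swap' rows are 
filtered out, a query along these lines should work — a sketch based on 
the columns shown above:

```sql
-- Expand the tiles arrays of the five most recent 'play' moves and
-- concatenate the letters.
SELECT string_agg(t ->> 'letter', ' ')
FROM (
    SELECT tiles
    FROM words_moves
    WHERE gid = 656
      AND action = 'play'  -- 'swap' rows hold plain strings, not arrays
    ORDER BY played DESC
    LIMIT 5
) m, jsonb_array_elements(m.tiles) AS t;
```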






Regards
Alex








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Primary key gist index?

2018-03-14 Thread Adrian Klaver

On 03/14/2018 11:10 AM, Jeremy Finzel wrote:


On Wed, Mar 14, 2018 at 8:33 AM, Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 03/14/2018 06:19 AM, Jeremy Finzel wrote:

Hello!  From all that I can tell, it is not possible to use a
btree_gist index as a primary key.  If so, why not?  I have a
table with this gist


https://www.postgresql.org/docs/10/static/btree-gist.html
<https://www.postgresql.org/docs/10/static/btree-gist.html>

"In general, these operator classes will not outperform the
equivalent standard B-tree index methods, and they lack one major
feature of the standard B-tree code: the ability to enforce uniqueness."


index which truly ought to be its primary key.  as_of_date is of
date range type:

EXCLUDE USING gist (id WITH =, as_of_date WITH &&)

Any direction here would be much appreciated.

Right now, I am forced to create a redundant btree index UNIQUE,
btree (id, lower(as_of_date)) in order to have a primary key on
the table.

Thanks!
    Jeremy



-- 
Adrian Klaver

adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>


Thank you for the ref.  But I don't understand how an exclusion 
constraint does not have "the ability to enforce uniqueness" unless they 
just mean that is the case "under the covers of postgres".  That is 
exactly what it does, right?  By the definition of the exclusion index I 
have above, there cannot be more than one row with the same id and 
as_of_date values.


https://www.postgresql.org/docs/10/static/sql-createtable.html#SQL-CREATETABLE-EXCLUDE

"The EXCLUDE clause defines an exclusion constraint, which guarantees 
that if any two rows are compared on the specified column(s) or 
expression(s) using the specified operator(s), not all of these 
comparisons will return TRUE. If all of the specified operators test for 
equality, this is equivalent to a UNIQUE constraint, although an 
ordinary unique constraint will be faster. ..."


I have always taken the above to mean that while it is possible to create 
an EXCLUDE constraint that enforces uniqueness, that operation is internal 
to the EXCLUDE and is not knowable to Postgres when it is looking for a 
UNIQUE index. Probably because an EXCLUDE can be non-unique.





Thanks,
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-14 Thread Adrian Klaver

On 03/14/2018 01:47 PM, chris wrote:

Oh sorry for getting the wrong information.


At this point not sure whether it is wrong information or not.



How would I get the information on what driver is currently installed?


I am not a Java programmer, so I am not the best person to answer this. 
Still, I would think the place to start would be the connection code 
itself. Another way would be examining CLASSPATH:


https://jdbc.postgresql.org/documentation/head/classpath.html




On 03/14/2018 09:44 AM, Adrian Klaver wrote:

On 03/14/2018 07:51 AM, chris wrote:


I believe its Postgresql-9.4.1208.jre7.jar


Pretty sure solving this is going to require knowing exactly what 
driver is in use. The failover syntax looks to be fairly recent, so 
being off by a little on the driver version can make a big difference.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: psql output result

2018-03-15 Thread Adrian Klaver

On 03/15/2018 07:57 AM, Tiffany Thang wrote:

Hi,
I have a SQL script that does some DDLs, inserts and counts.

The command I ran is
psql dbname -c "\i crscript.sql" > output.txt

In output.txt, I got something like
INSERT 0 1
INSERT 0 1
CREATE TABLE
INSERT 0 2
  count
---
  9
(1 row)


Is there a way to output the SQLs and DDLs so that I could easily 
identify what statements were executed?


aklaver@tito:~> psql -d test -U aklaver -a -f sql_test.sql
\pset null 'NULL'
Null display is "NULL".
CREATE TABLE tbl_test(fld_1 int, fld_2 varchar);
CREATE TABLE
INSERT INTO tbl_test VALUES (1, 'dog'), (2, 'cat');
INSERT 0 2
DROP TABLE tbl_test;
DROP TABLE



Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Want to disable fully qualified table names on pg_dump in pg_dump (PostgreSQL) 9.6.8

2018-03-15 Thread Adrian Klaver

On 03/15/2018 04:00 PM, Foolish Ewe wrote:

Hello All:

A number of our team members and I use pg_dump to export schemas in an 
Ubuntu 16.04 environment. I happen to have a postgres 9.6.4 server
that runs in a docker container, and in some cases I see the following 
select statement and fully qualified table names in the

CREATE TABLE and ALTER TABLE statements:

SELECT pg_catalog.set_config('search_path', '', false);

CREATE TABLE database_name.table_name


I am pretty sure you are actually seeing:

CREATE TABLE schema_name.table_name



and likewise for ALTER TABLE.

But other users (who haven't updated their systems recently) do not see 
the SELECT statement and we see unqualified table names, e.g.:



CREATE TABLE table_name


These changes in format impact our workflows; we would prefer to have 
unqualified table names.






Does anyone have an idea of what might cause this?   Is there something 
we can do on the client side to avoid getting qualified table names?


The reason:

https://www.postgresql.org/about/news/1834/

"The purpose of this release is to address CVE-2018-1058, which 
describes how a user can create like-named objects in different schemas 
that can change the behavior of other users' queries and cause 
unexpected or malicious behavior, also known as a "trojan-horse" attack. "


More information

https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058:_Protect_Your_Search_Path

There is no flag to unset this.



With best regards:

Bill






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: error 53200 out of memory

2018-03-16 Thread Adrian Klaver

On 03/16/2018 04:00 AM, francis cherat wrote:

Hello,

we got an error 53200 after a SQL statement:

[5-1] ERROR:  53200: out of memory
[6-1] DETAIL:  Failed on request of size 1572864.
[7-1] LOCATION:  AllocSetRealloc, aset.c:973

in jboss logs we have got those errors

org.jboss.logging.jboss-logging - 3.1.2.GA | ERROR: out of memory
   Détail : Failed on request of size 1572864.
executing failed
org.hibernate.exception.GenericJDBCException: could not extract ResultSet

cluster configuration
server_version  | 9.3.16
effective_cache_size                | 12800MB
shared_buffers                      | 384MB
work_mem                            | 384MB

Server configuration
RHEL 6.5
RAM: 16 GB
2 CPUs

Thanks for your feedback


Is there anything in the Postgres and/or system logs from the same time 
that might shed light on this?




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Nvim as external editor in psql as Postgres root user - .vimrc (environment?) issue

2018-03-16 Thread Adrian Klaver

On 03/16/2018 10:47 AM, Victoria wrote:

Hello; I am using PostgreSQL v.10.2 as a root user (sudo -u postgres -i) on my 
local Arch Linux installation.

I want to use Neovim (nvim v.0.2.2) as my external editor (\e) in psql; the 
current default is the Arch Linux default system editor, vi.

If I add this to my ~/.psqlrc (/home/victoria/.psqlrc)

   \setenv EDITOR "/usr/bin/nano"

then I can use nano, no problem.

However, if I replace that with

   \setenv EDITOR "/usr/bin/nvim"

and chown this postgres directory (to get around a .local/ permissions error 
that arises)

   sudo chown -R postgres:victoria /var/lib/postgres

when I type \e in psql I can edit in nvim.

The issue I have is that as I am in a postgres environment, my user (victoria) 
~/.vimrc file (I link my nvim.init file to it) is not being loaded, so I don't 
have access to my Vim/NeoVim settings.


I guess the question is why sudo -u postgres -i? You can access the 
server via psql from your home directory. If you want to work as 
postgres user the simplest solution would be to add the .vimrc file to 
the postgres user directory.




I tried the nvim -u "" type statements in my ~/.psqlrc, but that 
throws an error about not a valid path (again, likely due to the Pg root environment?).

Suggestions?  Thank you.
   





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: ora2pg and invalid command \N

2018-03-16 Thread Adrian Klaver

On 03/16/2018 10:12 AM, Charlin Barak wrote:

Hi,
I'm using ora2pg to migrate our Oracle database to Postgres. I was able 
to generate the data file using TYPE=COPY but when I attempted to load 


What was the complete command you used to generate the data file?

the file via psql, I got lots of "invalid command \N" errors. The 


Can we see the complete error?

Also a section of the data file that contains the data that caused the 
error?


resolution on the internet was not clear and was  not ora2pg related. 
How do I resolve this issue? Can someone provide some guidance?


Thanks.







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: ora2pg and invalid command \N

2018-03-16 Thread Adrian Klaver

On 03/16/2018 11:36 AM, Charlin Barak wrote:

Thanks for your response.

the NULL values from Oracle were indeed replaced by \N in the data 
output. How do I go about loading this file? I hope I do not have to 
temporarily replace \N with a string in the flat file and then later 
update it in Postgres.


That should not be necessary:

https://www.postgresql.org/docs/10/static/sql-copy.html

"NULL

Specifies the string that represents a null value. The default is 
\N (backslash-N) in text format, and an unquoted empty string in CSV 
format. You might prefer an empty string even in text format for cases 
where you don't want to distinguish nulls from empty strings. This 
option is not allowed when using binary format."
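As a quick sketch (the table and file names here are hypothetical), a text-format file produced with TYPE=COPY loads with a plain \copy, since \N is already the default NULL marker; a CSV file would instead use an empty string:

```sql
-- hypothetical target table matching the dumped columns
CREATE TABLE ora_data (id int, parent_id int, code text, grp int, qty int);

-- text format: \N in the input is read as NULL by default
\copy ora_data FROM 'ora_data.txt'

-- CSV format: an unquoted empty string is the default NULL marker
\copy ora_data FROM 'ora_data.csv' WITH (FORMAT csv, NULL '')
```

The point is that no preprocessing of \N should be needed for a text-format load.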



Solving this is going to require more information. From my previous post:

What was the complete command you used to generate the data file?

Can we see the complete error?

Also a section of the data file that contains the data that caused the 
error?





Thanks.

102 48299   50  C   3  \N
103     48299   \N  G   3  45
...
...

On Fri, Mar 16, 2018 at 2:06 PM, Pavel Stehule <pavel.steh...@gmail.com> wrote:




2018-03-16 18:12 GMT+01:00 Charlin Barak <charlinba...@gmail.com>:

Hi,
I'm using ora2pg to migrate our Oracle database to Postgres. I
was able to generate the data file using TYPE=COPY but when I
attempted to load the file via psql, I got lots of "invalid
command \N" errors. The resolution on the internet was not clear
and was  not ora2pg related. How do I resolve this issue? Can
someone provide some guidance?


\N is the symbol for NULL, but \<cmd> sequences are also psql commands. This
behavior means psql lost synchronization and tried to evaluate data
lines as commands. psql's import is tolerant, which means the original source
of the issue is usually lost. Try to import the data with the option ON_ERROR_STOP.


https://stackoverflow.com/questions/4480381/postgres-sql-fail-on-script-error

<https://stackoverflow.com/questions/4480381/postgres-sql-fail-on-script-error>

    Regards

    Pavel
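The ON_ERROR_STOP suggestion above can be sketched like this (the file and database names are placeholders); with it set, psql halts at the first failing statement instead of drifting out of sync and misreading data lines as commands:

```sql
-- inside psql, before running the ora2pg output file
\set ON_ERROR_STOP on
\i ora2pg_data.sql
```

From the shell, the equivalent is psql -v ON_ERROR_STOP=1 -f ora2pg_data.sql target_db; the import then stops at the statement that first broke synchronization, which makes the offending data visible.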


Thanks.









--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Nvim as external editor in psql as Postgres root user - .vimrc (environment?) issue

2018-03-16 Thread Adrian Klaver

On 03/16/2018 01:06 PM, Victoria wrote:

Ok, here is a clumsy solution.


Still not sure why you want to run as the system postgres user. The 
system user postgres is not the same as the Postgres database user 
postgres. It is just convention that the system user that Postgres runs 
as is called postgres. If you want to work in Postgres as the database 
user postgres, you just need to supply -U postgres to the client (psql in 
this case). You can do that from your home account (victoria) without all 
the contortions below :)




I have this entry in my /home/victoria/.psqlrc file,
   
   \setenv EDITOR "/usr/bin/nvim"


As you see below, I symlink to that file, from postgres.

[victoria@victoria ~]$ sudo -u postgres -i

[postgres@victoria ~]$ pwd

   /var/lib/postgres

[postgres@victoria ~]$ ls -la

   total 108
   drwxrwxr-x  6 postgres victoria  4096 Mar 16 12:48 .
   drwxr-xr-x 33 root root  4096 Mar 16 00:00 ..
   -rw---  1 postgres postgres   385 Mar 16 12:49 .bash_history
   -rwxr-xr-x  1 root root   806 Mar 16 12:41 .bashrc
   drwx--  2 postgres postgres  4096 Mar 16 12:23 .cache
   drwxrwxr-x  2 postgres victoria  4096 Feb 23 13:26 data
   drwx--  3 postgres postgres  4096 Mar 16 12:12 .local
   lrwxrwxrwx  1 postgres victoria    62 Feb 23 15:10 .psql_history-postgres -> /mnt/Vancouver/Programming/RDB/postgres/postgres/.psql_history
   lrwxrwxrwx  1 postgres victoria    22 Feb 23 14:59 .psqlrc -> /home/victoria/.psqlrc
   drwxr-xr-x  2 postgres postgres  4096 Mar 16 12:38 .vim
   -rw---  1 postgres postgres   895 Mar 16 12:48 .viminfo
   -rw-r--r--  1 postgres postgres 68234 Mar 16 12:47 .vimrc

## NOTE: .bashrc and .vimrc are edited COPIES (not symlinks) of 
/home/victoria/{.bashrc | .vimrc}

[postgres@victoria ~]$ cat /var/lib/postgres/.bashrc

   export PSQL_EDITOR="/usr/bin/nvim -u /var/lib/postgres/.vimrc"

## "/var/lib/postgres/.vimrc" is the same as "/home/victoria/.vimrc" EXCEPT
## that I commented out line 77, "execute pathogen#infect()", as that was
## throwing an error when starting nvim (Neovim) as the psql \e external editor.

## Important (slight annoyance): need to load that "postgres" .bashrc file:

[postgres@victoria ~]$ exec bash

[postgres@victoria ~]$ psql

   psql (10.2)
   Type "help" for help.

[postgres]# \e  ## can edit in Neovim, with ~/.vimrc settings, preferences, 
customizations ...

[postgres]# \q

[postgres@victoria ~]$ exit

   exit

[victoria@victoria ~]$

I wasn't able to run the "exec bash" command automatically after starting 
postgres, hence the need to run it manually in the postgres shell before launching 
psql.

======





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Want to disable fully qualified table names on pg_dump in pg_dump (PostgreSQL) 9.6.8

2018-03-16 Thread Adrian Klaver

On 03/16/2018 01:55 PM, Foolish Ewe wrote:

Thank you Adrian, this is informative.

With best regards:


On a hunch:

1) Installed  SQL Workbench/J (https://www.sql-workbench.eu/index.html)

2) Set up Postgres JDBC driver(https://jdbc.postgresql.org/download.html)

3) Opened DbExplorer:
https://www.sql-workbench.eu/manual/dbexplorer.html

4) Selected all the objects in a schema and used Create DDL Script to 
generate a file with the objects. The objects were not created with a 
schema qualifier.






Bill



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Could not open extension control file “/usr/share/postgresql/10/extension/pg_similarity.control”: No such file or directory

2018-03-16 Thread Adrian Klaver

On 03/16/2018 03:18 PM, hmidi slim wrote:

Hi,
I’m trying to create the extension pg_similarity 
(https://github.com/eulerto/pg_similarity) in a Docker container.

Dockerfile:

FROM postgres:10
ENV POSTGRES_USER user
ENV POSTGRES_PASSWORD user
ENV POSTGRES_DB user_db
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get -y install \
        postgresql-server-dev-all \
        wget \
        make \
        gcc \
    && rm -rf /var/lib/apt/lists/*
RUN wget -c 'http://pgfoundry.org/frs/download.php/2237/pg_similarity-0.0.19.tgz'
RUN tar -zxf pg_similarity-0.0.19.tgz
RUN cd pg_similarity \
    && USE_PGXS=1 make \
    && USE_PGXS=1 make install


Then I build the image and I run the container. I exec into the container:


psql -U user -d user_db
psql (10.3 (Debian 10.3-1.pgdg90+1))
Type "help" for help.

user_db=# create extension pg_similarity;



I got this error:

ERROR:  could not open extension control file 
"/usr/share/postgresql/10/extension/pg_similarity.control": No such 
file or directory



Under /usr/share/postgres/10/contrib I find the file 
pg_similarity.sql. How can I fix this problem?


But do you see pg_similarity.control, which is what the error is complaining 
about?











--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Could not open extension control file “/usr/share/postgresql/10/extension/pg_similarity.control”: No such file or directory

2018-03-17 Thread Adrian Klaver

On 03/17/2018 03:03 AM, hmidi slim wrote:
I didn't find the file pg_similarity.control when I ran find / -name 
'pg_similarity.control' 2> /dev/null


In your original post you use the file from pgFoundry:

pgfoundry.org/frs/download.php/2237/pg_similarity-0.0.19.tgz

When I installed from that file I got:

/usr/bin/mkdir -p '/usr/local/pgsql10/lib64'
/usr/bin/mkdir -p '/usr/local/pgsql10/share/contrib'
/usr/bin/install -c -m 755  pg_similarity.so 
'/usr/local/pgsql10/lib64/pg_similarity.so'
/usr/bin/install -c -m 644 .//uninstall_pg_similarity.sql 
pg_similarity.sql '/usr/local/pgsql10/share/contrib/'


Note no pg_similarity.control and also it did not install to 
~/share/extension but to ~/share/contrib.


From here:

https://github.com/eulerto/pg_similarity

The latest version is 1.0 not 0.0.19. I did:

git clone https://github.com/eulerto/pg_similarity.git

and then ran the make process and got:

/usr/bin/mkdir -p '/usr/local/pgsql10/lib64'
/usr/bin/mkdir -p '/usr/local/pgsql10/share/extension'
/usr/bin/mkdir -p '/usr/local/pgsql10/share/extension'
/usr/bin/install -c -m 755  pg_similarity.so 
'/usr/local/pgsql10/lib64/pg_similarity.so'
/usr/bin/install -c -m 644 .//pg_similarity.control 
'/usr/local/pgsql10/share/extension/'
/usr/bin/install -c -m 644 .//pg_similarity--1.0.sql 
.//pg_similarity--unpackaged--1.0.sql pg_similarity.sql 
'/usr/local/pgsql10/share/extension/'



The pg_similarity.control was installed and the extension was installed 
in ~/share/extension.


More importantly I could CREATE the extension:

test=# CREATE EXTENSION pg_similarity;
CREATE EXTENSION




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: What is the meaning of pg_restore output?

2018-03-17 Thread Adrian Klaver

On 03/17/2018 07:30 AM, Alexandru Lazarev wrote:

Hi All, Thanks for your replies.
What about the "item" keyword - does it appear only when -j > 1?

Other phrases, like
pg_restore: processing item 3615 DEFAULT id
pg_restore: creating DEFAULT "public.id"
are observed if pg_restore is executed with flag -j N (where N > 1)
and aren't present with -j 1 (or without -j).


2. Are they (output phrases) documented somewhere?


Grepping source in src/bin/pg_dump/:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_dump/pg_backup_archiver.c;h=8892b177905ea81b58ddbc728fd46d47e2d017c6;hb=65c6b53991e1c56f6a0700ae26928962ddf2b9fe

if (do_now)
   {
 /* OK, restore the item and update its dependencies */
 ahlog(AH, 1, "processing item %d %s %s\n",
   next_work_item->dumpId,
   next_work_item->desc, next_work_item->tag);

Which is in section with this comment:

/*
 * Main engine for parallel restore.
 *
 * Parallel restore is done in three phases.  In this first phase,
 * we'll process all SECTION_PRE_DATA TOC entries that are allowed to be
 * processed in the RESTORE_PASS_MAIN pass.  (In practice, that's all
 * PRE_DATA items other than ACLs.)  Entries we can't process now are
 * added to the pending_list for later phases to deal with.
 */


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: changing my mail address

2018-03-17 Thread Adrian Klaver

On 03/17/2018 11:31 AM, Ron Johnson wrote:

On 03/17/2018 01:08 PM, Stephen Frost wrote:

Greetings,

* Ron Johnson (ron.l.john...@cox.net) wrote:

On 03/17/2018 10:51 AM, Stephen Frost wrote:

Once you've done that, log out of all PG sites (possibly by deleteing
cookies which you may have from them) and then log into postgresql.org
first and then go to lists.postgresql.org and the update should be
passed through.
Sadly, https://www.postgresql.org/account/reset/ hasn't worked for me 
in yearss.

That wasn't what the OP was asking about, but if you're having an issue
there, then please explain exactly what you're trying to do and what
issue you're seeing.  We're happy to try and resolve any such issues.


Having https://www.postgresql.org/account/reset/ send an password email 
to the address where I receive list emails hasn't worked for me in 
years. (Yes, I looked in the Spam folder, and the Inbox, and the folder 
where -general emails are filtered to.)


I just tried it and it worked for me:

"You are receiving this e-mail because you requested a password reset 
for your

PostgreSQL community account.

Please go to the following page and choose a new password:

https://www.postgresql.org/account/reset/ODIzMA-4uk-1d498cb7a2655b5e69ac/

Your username, in case you've forgotten, is aklaver.

"



When I'd stopped using PG for a while, the simplest solution was to send 
them to the Trash.






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Logical replication: duplicate key problem

2018-03-19 Thread Adrian Klaver

On 03/19/2018 12:30 AM, Johann Spies wrote:

I have followed the following process:

* pg_dump --schema-only on server 1
* restored that schema-only dump on server 2
* created a publication on server 1 including all the tables on server 1
* created a subscription on server 2

Server 2 does not get updated data for the schema involved from
anywhere else than the logical replication.

For some tables (some of them having many millions of records) the
process of replication seems to go smoothly. But for too many tables I
get this type of error messages:

2018-03-18 08:00:45.915 SAST [13512] ERROR:  duplicate key value
violates unique constraint "country_pkey"
2018-03-18 08:00:46.088 SAST [13513] ERROR:  duplicate key value
violates unique constraint "abstract_id_key"


Are you getting the same errors on server 1?

Are you sure nothing else is touching server 2?
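One possibility worth checking, purely as a hedge: if the duplicate-key errors occur during the initial table synchronization, the subscriber tables may not have been empty when the subscription started. If server 2 is already known to hold the data, the initial copy can be skipped (the subscription, publication, and connection values below are placeholders):

```sql
-- skip the initial table sync; rely only on ongoing change replication
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=server1 dbname=mydb user=repuser'
    PUBLICATION mypub
    WITH (copy_data = false);
```

This only applies if the existing rows on server 2 are already identical to server 1's.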



In many of those cases it involves a serial field.  In at least one
case it involved a citext field with a unique constraint.

Now I just wonder how logical replication between the two servers can
produce such errors if the constraints on both sides are the same?  Is
this a bug?

Regards
Johann




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 09:31 AM, Jimmy Augustine wrote:

Dear Friends,

I am newbie to postgresql.
I have 162 GB in my database, but when I check the size of all tables, I 
obtain approximately 80 GB.
I also see that I have 68GB of temporary files; however, I only found 
2.4MB at postgres/data/base/pgsql_tmp.


Exactly how did you determine this?



Could you tell me what are those temporary files and where are they at? 
Can I delete some of them?


All values come from pgAdmin 4 and were checked by my own SQL 
queries (postgresql-9.6).


Can you show actual queries used?


I already ran vacuum full and there are few dead tuples.

Best regards,
Jimmy AUGUSTINE



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 10:04 AM, Jimmy Augustine wrote:



2018-03-19 17:45 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:

On 03/19/2018 09:31 AM, Jimmy Augustine wrote:

Dear Friends,

I am newbie to postgresql.
I have 162 GB on my database but when I check size of all
tables, I approximately obtain 80 GB.
I also see that I have 68GB of temporary files however I only
found 2.4MB at postgres/data/base/pgsql_tmp.


Exactly how did you determine this?

I used this command and summed the result for all databases:
SELECT pg_size_pretty(pg_total_relation_size('table_name'));

And this for the complete database:
SELECT pg_size_pretty(pg_database_size('Database Name'));



So where did the 68GB number for temporary files come from?



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 10:12 AM, Jimmy Augustine wrote:



     On 03/19/2018 09:31 AM, Jimmy Augustine wrote:

         Dear Friends,

         I am newbie to postgresql.
         I have 162 GB on my database but when I check size of all
         tables, I approximately obtain 80 GB.
         I also see that I have 68GB of temporary files however
I only
         found 2.4MB at postgres/data/base/pgsql_tmp.


     Exactly how did you determine this?

I used this command and sum result for all database :
SELECT pg_size_pretty(pg_total_relation_size('table_name'));

And this for complete database :
SELECT pg_size_pretty(pg_database_size('Database Name'));


So where did the 68GB number for temporary files come from?

I didn't measure this value myself. I was puzzled by the gap 
between the two queries, so I checked pgAdmin 4 and saw this value.


In what section of pgAdmin4?

Or do you know what query it used?




-- 
Adrian Klaver

adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 10:17 AM, Melvin Davidson wrote:






I think your problem is that SELECT 
pg_size_pretty(pg_total_relation_size('table_name')); only looks at the 
current database,

but SELECT pg_size_pretty(pg_database_size('Database Name')); looks at 
ALL databases.


Not according to here:

https://www.postgresql.org/docs/10/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT


Try this query instead to show individual database sizes.

SELECT oid,
       datname,
       pg_size_pretty(pg_database_size(datname)) AS size_pretty,
       pg_database_size(datname) AS size,
       (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
          FROM pg_database) AS total,
       ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
                                        FROM pg_database)) * 100)::numeric(6,3) AS pct
  FROM pg_database
 ORDER BY datname;

--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 10:29 AM, Jimmy Augustine wrote:








That's aggregated. Not current values.

Ah, do you know of some documentation about that?


https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW



Andreas


--
2ndQuadrant - The PostgreSQL Support Company





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Adrian Klaver

On 03/19/2018 10:27 AM, Jimmy Augustine wrote:

I tried this query and my database size is equal to 162GB.



Well you can always look in $DATA directly. The database will be under 
$DATA/base/<oid>.


You can find the <oid> like this:

select oid, datname from pg_database where datname='<database name>';


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: COPY error when \. char

2018-03-20 Thread Adrian Klaver

On 03/20/2018 04:32 AM, Nicolas Paris wrote:

Hello

I get an error when loading this kind of csv:


test.csv:

"hello ""world"""
"\."
"this
works
"
"this
\.
does
not"


table:

create table test (field text);


sql:

\copy test (field) from 'test.csv' CSV  quote '"' ESCAPE '"';
ERROR:  unterminated CSV quoted field
CONTEXTE : COPY test, line 7: ""this
"

Apparently, having the \. string on a line by itself makes it break.
Is this normal?


https://www.postgresql.org/docs/10/static/sql-copy.html
"Because backslash is not a special character in the CSV format, \., the 
end-of-data marker, could also appear as a data value. To avoid any 
misinterpretation, a \. data value appearing as a lone entry on a line 
is automatically quoted on output, and on input, if quoted, is not 
interpreted as the end-of-data marker. If you are loading a file created 
by another application that has a single unquoted column and might have 
a value of \., you might need to quote that value in the input file."
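For illustration, the end-of-data marker is easiest to see with inline COPY in text format, where a lone \. line terminates the input stream:

```sql
CREATE TABLE test (field text);

-- everything up to the lone \. line is treated as data
COPY test (field) FROM stdin;
hello world
\.
```

As an assumption rather than a confirmed diagnosis: psql's client-side \copy scans its input line by line, and may treat a lone \. line as end-of-data even when it occurs inside a quoted CSV field, which would leave the quoted field unterminated, matching the error above.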




Thanks





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Foreign Key locking / deadlock issue.

2018-03-20 Thread Adrian Klaver

On 03/20/2018 04:46 AM, HORDER Phil wrote:

Hi,

I’m trying to understand why I’m getting a deadlock issue, and how to 
work around it.


At base, I think the problem is:

1. Updates to a parent table are creating row-level write locks,

2. updates to a child table that set the foreign key value to the parent 
table are then blocked.


While investigating, I found the row locking documentation, which says 
that I can request read locks that don’t block.


But my sample code still gets blocked.

https://www.postgresql.org/docs/9.6/static/explicit-locking.html

Can someone explain what I’m missing please?

parent process

---

start transaction;

select * from pl where pl_id in (2,3) for no key update of pl;  (1)

update pl set m_number = '234' where pl_id = 2; (2)

update pl set m_number = '345' where pl_id = 3; (3)

child process

---

start transaction;

select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)

update eln set pl_id = 3 where event_id = 303;  (5)

update eln set pl_id = 2 where event_id = 302;  (6)



I do not see a commit for either transaction.



My Parent process inserts and updates on the PL table, but never changes 
the key value.


I am assuming when you say key value you are referring to PRIMARY KEY?

What is the key(PK) column?



My Child process inserts and updates on the ELN table, and can set the 
FK reference value to the PL table.


I can understand that the default lock on the PL update will block the 
foreign key check from the ELN table.


Why does this example still get blocked?

Is there a way around this without dropping the foreign key?

(And if so…. How can I get this to work in Spring Data / JPA?)
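A standard workaround sketch, independent of the lock modes used: have every transaction acquire its row locks in the same order (here ascending pl_id, matching the parent process above), so a wait cycle cannot form:

```sql
-- child process, reordered: touch pl_id 2 before pl_id 3, like the parent does
start transaction;
select pl_id from pl where pl_id in (2,3) order by pl_id for key share of pl;
update eln set pl_id = 2 where event_id = 302;
update eln set pl_id = 3 where event_id = 303;
commit;
```

The important part is that the two UPDATEs take their foreign-key share locks on pl in the same order as the parent's updates; in the original example the child touched pl_id 3 first while the parent touched pl_id 2 first, which is the classic deadlock pattern.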

Phil Horder

Database Mechanic

Thales

Land and Air Systems

Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: error 53200 out of memory

2018-03-20 Thread Adrian Klaver

On 03/20/2018 12:08 AM, francis cherat wrote:

Hello,

there is no message in /var/log/messages


How about the Postgres logs?



Regards



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL 9.6 Temporary files

2018-03-20 Thread Adrian Klaver

On 03/20/2018 03:16 AM, Jimmy Augustine wrote:

Thanks all for your responses,

$du -h $MY_DATA/base/$BASE_OID/ returns 162GB but when I execute this query:

SELECT stats.relname AS table,
       pg_size_pretty(pg_relation_size(statsio.relid)) AS table_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid) -
                      pg_relation_size(statsio.relid)) AS related_objects_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid)) AS total_table_size,
       stats.n_live_tup AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats USING (relname)
 WHERE stats.schemaname = current_schema
UNION ALL
SELECT 'TOTAL' AS table,
       pg_size_pretty(sum(pg_relation_size(statsio.relid))) AS table_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid) -
                          pg_relation_size(statsio.relid))) AS related_objects_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid))) AS total_table_size,
       sum(stats.n_live_tup) AS live_rows
  FROM pg_catalog.pg_statio_user_tables AS statsio
  JOIN pg_stat_user_tables AS stats USING (relname)
 WHERE stats.schemaname = current_schema
 ORDER BY live_rows ASC;

I obtain 80GB in total_table_size (half of my database); where is the missing data?


First of all, you are using pg_statio_user_tables, which does not count 
system tables.


Second, I am pretty sure the use of current_schema is limiting the results to 
only one schema in the database.
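A sketch of a whole-database alternative, as an assumption about what you want to compare: summing pg_total_relation_size over tables and materialized views in every schema (their indexes and TOAST data are already included in that function's result) should land much closer to pg_database_size:

```sql
-- tables ('r') and materialized views ('m') across all schemas,
-- including the system catalogs that pg_statio_user_tables skips
SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS total
  FROM pg_class c
 WHERE c.relkind IN ('r', 'm');
```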





2018-03-19 19:32 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:


On 03/19/2018 10:27 AM, Jimmy Augustine wrote:

I tried this query and my database size is equal to 162GB.


Well you can always look in $DATA directly. The database will be
under $DATA/base/<oid>.

You can find the <oid> like this:

select oid, datname from pg_database where datname='<database name>';


-- 
Adrian Klaver

adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Foreign Key locking / deadlock issue.

2018-03-20 Thread Adrian Klaver

On 03/20/2018 06:55 AM, HORDER Phil wrote:

Please reply to list also.
Ccing list.

Still not certain what the PK is or what key value refers to?


Well this is just sample SQL to demonstrate the problem.
If each process executes lines of code alternately, a deadlock occurs.

The commit would obviously be required by an application, but one transaction 
would be cancelled by the deadlock exception, and the demonstrator would 
roll back the other.

Phil Horder
Database Mechanic



-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: 20 March 2018 13:51
To: HORDER Phil; pgsql-general@lists.postgresql.org
Subject: Re: Foreign Key locking / deadlock issue.

On 03/20/2018 04:46 AM, HORDER Phil wrote:

Hi,

I'm trying to understand why I'm getting a deadlock issue, and how to
work around it.

At base, I think the problem is:

1.Updates to a parent table are creating row level write locks,

2.updates to a child table set the foreign key value to the parent
table, which are then blocked.

While investigating, I found the row locking documentation, which says
that I can request read locks that don't block.

But my sample code still gets blocked.

https://www.postgresql.org/docs/9.6/static/explicit-locking.html

Can someone explain what I'm missing please?

parent process

---

start transaction;

select * from pl where pl_id in (2,3) for no key update of pl;  (1)

update pl set m_number = '234' where pl_id = 2; (2)

update pl set m_number = '345' where pl_id = 3; (3)

child process

---

start transaction;

select pl_id from pl where pl_id in (2,3) for key share of pl;  (4)

update eln set pl_id = 3 where event_id = 303;  (5)

update eln set pl_id = 2 where event_id = 302;  (6)



I do not see a commit for either transaction.



My Parent process inserts and updates on the PL table, but never
changes the key value.


I am assuming when you say key value you are referring to PRIMARY KEY?

What is the key(PK) column?



My Child process inserts and updates on the ELN table, and can set the
FK reference value to the PL table.

I can understand that the default lock on the PL update will block the
foreign key check from the ELN table.

Why does this example still get blocked?

Is there a way around this without dropping the foreign key?

(And if so.. How can I get this to work in Spring Data / JPA?)

Phil Horder

Database Mechanic

Thales

Land and Air Systems

Horizon House, Throop Road, Templecombe, Somerset, BA8 0DH, UK




--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Restore - disable triggers - when they fired?

2018-03-20 Thread Adrian Klaver

On 03/20/2018 07:31 AM, Durumdara wrote:

Dear Members!

I saw in PGAdmin 3/4 that pg_restore has an option "disable triggers".

Because we need to move some databases in the near future, I have to know 
what this option means.


I created a table with a BEFORE INSERT trigger:

create table tr_test
(
id integer not null primary key,
value1 varchar(100),
value2 varchar(100)
);

insert into tr_test values(1, 'a', 'a');
insert into tr_test values(2, 'b', 'b');

CREATE OR REPLACE FUNCTION tfbu_tr_test()
   RETURNS trigger AS
$BODY$
begin
     new.value2 = cast(current_timestamp as varchar(30));
     RETURN NEW;
end;
$BODY$
   LANGUAGE plpgsql VOLATILE
   COST 100;


  CREATE TRIGGER tbi_tr_test
   BEFORE INSERT
   ON tr_test
   FOR EACH ROW
   EXECUTE PROCEDURE public.tfbu_tr_test();

insert into tr_test values(3, 'c', 'c');
select * from tr_test;


and I tried to dump and restore in PGAdmin IV.

The dumped data is same as I read after restore.

The pg_restore log shows me that triggers and indexes created after data 
copy.


At this point I am confused about the "disable triggers" option.

When would it be useful?


https://www.postgresql.org/docs/10/static/app-pgrestore.html
"--disable-triggers

This option is relevant only when performing a data-only restore. 
It instructs pg_restore to execute commands to temporarily disable 
triggers on the target tables while the data is reloaded. Use this if 
you have referential integrity checks or other triggers on the tables 
that you do not want to invoke during data reload.


Presently, the commands emitted for --disable-triggers must be done 
as superuser. So you should also specify a superuser name with -S or, 
preferably, run pg_restore as a PostgreSQL superuser.





At first I supposed that the data copy could somehow fire the triggers - but 
how?


Which triggers? And in what order would they fire?

Or do they remain disabled AFTER the restore, for later by-hand 
manipulation?


So please enlighten me a little!

Thank you!

Regards
dd



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Restore - disable triggers - when they fired?

2018-03-20 Thread Adrian Klaver

On 03/20/2018 07:56 AM, Durumdara wrote:

Dear Adrian!


2018-03-20 15:47 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:




When it would be useful?


https://www.postgresql.org/docs/10/static/app-pgrestore.html
<https://www.postgresql.org/docs/10/static/app-pgrestore.html>
"--disable-triggers

     This option is relevant only when performing a data-only
restore. It instructs pg_restore to execute commands to temporarily
disable triggers on the target tables while the data is reloaded.
Use this if you have referential integrity checks or other triggers
on the tables that you do not want to invoke during data reload.

     Presently, the commands emitted for --disable-triggers must be
done as superuser. So you should also specify a superuser name with
-S or, preferably, run pg_restore as a PostgreSQL superuser.




Firstly I supposed that data copy somehow could start the
triggers - but how?

Which triggers? Or how they fired with this order?



I have read it, but I don't understand it.

Do you have a good example?


create table disable_trigger_test(id int PRIMARY KEY, fld_1 text);

insert into disable_trigger_test values (1, 'dog'), (2, 'cat');


test=> select * from disable_trigger_test ;
 id | fld_1
+---
  1 | dog
  2 | cat

pg_dump --disable-triggers -d test -U aklaver -t disable_trigger_test -a 
-f disable_trigger_test_data.sql



CREATE OR REPLACE FUNCTION public.trigger_test()
 RETURNS trigger
 LANGUAGE plpgsql
AS $function$
begin
new.fld_1 := new.fld_1 || 'test';
RETURN NEW;
end;
$function$;

CREATE TRIGGER tr_test  
  BEFORE INSERT
  ON disable_trigger_test
  FOR EACH ROW
  EXECUTE PROCEDURE public.trigger_test();



truncate disable_trigger_test ;

#Note I do this as a superuser.
psql -d test -U postgres -f disable_trigger_test_data.sql

test=> select * from disable_trigger_test ;
 id | fld_1
+---
  1 | dog
  2 | cat

test=> insert into disable_trigger_test values (3, 'fish');
INSERT 0 1
test=> select * from disable_trigger_test ;
 id |  fld_1
+--
  1 | dog
  2 | cat
  3 | fishtest
(3 rows)




Thanks!
dd



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-20 Thread Adrian Klaver

On 03/20/2018 08:23 AM, Stuart McGraw wrote:

I recently installed Ubuntu-17.10 and then discovered that
Postgresql from the Pgdg repos is only supported on Ubuntu LTS
releases (eg 16.04).  However info on the internet said pg-10
could be installed from Pgdg zesty repo, which with some package
version conflicts, I was able to do so and have a functional
pg-10.1 install.

However I need to upgrade to 10.3 and the normal "apt update;
apt upgrade" does not offer 10.3.  Also, doing a fresh install
still installs 10.1.


https://wiki.postgresql.org/wiki/Apt
"2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed it 
from their mirrors "




Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
one upgrade to it?


18.04 LTS (Bionic Beaver)?:

http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FDW Foreign Table Access: strange LOG message

2018-03-20 Thread Adrian Klaver

On 03/20/2018 11:52 AM, Albrecht Dreß wrote:

Hi all,

I use Postgres 10.3 on a Debian Stretch system with foreign tables, and 
noticed strange LOG messages when accessing them.


The data base setup is basically

---8<-- 


CREATE EXTENSION IF NOT EXISTS postgres_fdw WITH SCHEMA public;
CREATE SERVER ext_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (
     dbname 'ext_db', host 'localhost', updatable 'false');
CREATE FOREIGN TABLE public.ext_table (
     […]
) SERVER ext_server OPTIONS (schema_name 'public', table_name 
'some_table', updatable 'false');
---8<-- 



Now I use the following trivial Python2 (psycopg2) code to access the 
tables:


---8<-- 


import psycopg2
conn = psycopg2.connect(dbname='my_db')
with conn.cursor() as csr:
    csr.execute("SELECT * FROM […] LIMIT 1")
    csr.fetchone()
conn.close()
---8<-- 



When I access a “local” table of my_db in the SELECT statement, there is 
no LOG message.  However, reading from ext_table works just fine, but 
the conn.close() statement above triggers the log message


---8<-- 

[time stamp/pid] user@my_db LOG:  could not receive data from client: 
Connection reset by peer
---8<-- 



Did I miss something in the setup here?  Or is there any other way to 
get rid of the message (which fills ~95% of my logs)?


My suspicion is it has to do with this:

https://www.postgresql.org/docs/10/static/postgres-fdw.html#id-1.11.7.43.10
"
F.34.2. Connection Management

postgres_fdw establishes a connection to a foreign server during the 
first query that uses a foreign table associated with the foreign 
server. This connection is kept and re-used for subsequent queries in 
the same session. However, if multiple user identities (user mappings) 
are used to access the foreign server, a connection is established for 
each user mapping.

"

Guessing that conn.close() is not closing the internal FDW connection 
properly. Not sure how to fix, I will ponder some though.
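[Editorial note: a hedged sketch. PostgreSQL 10 offers no SQL-level way to drop these cached FDW connections, but later releases do; the function below exists only in PostgreSQL 14 and newer, so on 10.x the LOG message is simply harmless noise.]

```sql
-- Hedged sketch: on PostgreSQL 14+, postgres_fdw exposes functions to
-- drop its cached foreign-server connections before the session ends.
-- Not available on 10.x, where the foreign server logs the reset instead.
SELECT postgres_fdw_disconnect_all();
```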




Thanks in advance,
Albrecht.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FATAL: semctl(15073290, 4, SETVAL, 0) failed: Invalid argument

2018-03-20 Thread Adrian Klaver

On 03/20/2018 11:57 AM, JotaComm wrote:

​Hello,

Today I found this message in my Postgres log:

FATAL:  semctl(15073290, 4, SETVAL, 0) failed: Invalid argument
user=,db=,app=,host= LOG:  server process (PID 30741) exited with exit 
code 1

user=,db=,app=,host= LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back 
the current transaction and exit, because another server process exited 
abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and 
repeat your command.


Postgres: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc 
(Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit


Linux: Linux INVST-APP-01A 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 
14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux




Any suggestions to discovery about this behavior?


?:
https://www.postgresql.org/docs/10/static/kernel-resources.html
"
18.4.2. systemd RemoveIPC

If systemd is in use, some care must be taken that IPC resources (shared 
memory and semaphores) are not prematurely removed by the operating 
system. This is especially of concern when installing PostgreSQL from 
source. Users of distribution packages of PostgreSQL are less likely to 
be affected, as the postgres user is then normally created as a system user.


The setting RemoveIPC in logind.conf controls whether IPC objects are 
removed when a user fully logs out. System users are exempt. This 
setting defaults to on in stock systemd, but some operating system 
distributions default it to off.


A typical observed effect when this setting is on is that the semaphore 
objects used by a PostgreSQL server are removed at apparently random 
times, leading to the server crashing with log messages like


LOG: semctl(1234567890, 0, IPC_RMID, ...) failed: Invalid argument

Different types of IPC objects (shared memory vs. semaphores, System V 
vs. POSIX) are treated slightly differently by systemd, so one might 
observe that some IPC resources are not removed in the same way as 
others. But it is not advisable to rely on these subtle differences.


A “user logging out” might happen as part of a maintenance job or 
manually when an administrator logs in as the postgres user or something 
similar, so it is hard to prevent in general.


What is a “system user” is determined at systemd compile time from the 
SYS_UID_MAX setting in /etc/login.defs.


Packaging and deployment scripts should be careful to create the 
postgres user as a system user by using useradd -r, adduser --system, or 
equivalent.


Alternatively, if the user account was created incorrectly or cannot be 
changed, it is recommended to set


RemoveIPC=no

in /etc/systemd/logind.conf or another appropriate configuration file.
Caution

At least one of these two things has to be ensured, or the PostgreSQL 
server will be very unreliable.

"
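[Editorial note: the workaround from the quoted documentation amounts to this configuration fragment, followed by a restart of systemd-logind (e.g. `systemctl restart systemd-logind`); sketch only, path as given in the docs.]

```ini
# /etc/systemd/logind.conf
[Login]
RemoveIPC=no
```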



Thanks a lot.

Best regards

--
JotaComm
http://jotacomm.wordpress.com



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-20 Thread Adrian Klaver

On 03/20/2018 02:25 PM, Stuart McGraw wrote:

On 03/20/2018 02:19 PM, Christoph Berg wrote:
 > Re: Adrian Klaver 2018-03-20 
<4c40e7c5-efa7-00d7-b891-acc9c1ec7...@aklaver.com>

 >>> However I need to upgrade to 10.3 and the normal "apt update;
 >>> apt upgrade" does not offer 10.3.  Also, doing a fresh install
 >>> still installs 10.1.
 >>
 >> https://wiki.postgresql.org/wiki/Apt
 >> "2018-01-17: Ubuntu zesty (17.04) is unsupported now, Ubuntu removed 
it from

 >> their mirrors "
 >>
 >>> Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
 >>> one upgrade to it?
 >>
 >> 18.04 LTS (Bionic Beaver)?:
 >>
 >> http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/
 >
 > Ack. We skipped 17.10 and went straight to supporting the upcoming
 > 18.04 (you can already install it). Sorry, there's only 24h a day :(
 >
 > Christoph

Thank you Christoph and Adrian.
I changed the apt source to
   deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main

Then (transcripts edited for brevity)...
   # apt-get dist-update


I am going to say that was apt-get update. More below.


   ...
   # apt-get dist-upgrade
   The following packages will be REMOVED:
     pgadmin3
   The following packages will be upgraded:
     pgadmin3-data postgresql postgresql-client postgresql-client-common
     postgresql-common postgresql-contrib postgresql-doc postgresql-doc-10
     postgresql-server-dev-10
   9 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.

after completing the update and rebooting:

   $ psql --version
   psql (PostgreSQL) 10.1
   $ psql -c 'select version()'
   PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
6.3.0-12ubuntu2) 6.3.0 20170406, 64-bit


I suspect the above is coming from the Ubuntu repo, not the PGDG one. As 
an example from an Ubuntu machine that is using the PGDG repo:


psql --version
psql (PostgreSQL) 10.3 (Ubuntu 10.3-1.pgdg16.04+1)

postgres=# select version();
                                   version
-------------------------------------------------------------------------
 PostgreSQL 10.3 (Ubuntu 10.3-1.pgdg16.04+1) on x86_64-pc-linux-gnu,
 compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit
(1 row)






Likely I am missing something obvious due to my newness with Ubuntu,
but isn't the above supposed to work?  10.3 is a minor upgrade, yes?


Yes it is a minor upgrade.

What does pg_lsclusters show?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-21 Thread Adrian Klaver

On 03/20/2018 10:52 PM, Stuart McGraw wrote:
Looks like these posts are coming through a news group to me.
I am Ccing list to get response back there.



Is Pgdg 10.3 even available for ubuntu 17.10?  How the heck does
one upgrade to it?


18.04 LTS (Bionic Beaver)?:

http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/


I tried doing a full reinstall of Ubuntu-17.10 followed by a postgresql
install from the bionic repo (instead of installing 10.1 per above and
trying to upgrade)

   # apt-get -qy install postgresql postgresql-client postgresql-contrib \
   postgresql-doc pgadmin3 postgresql-server-dev-10 libpq-dev
   Reading package lists...
   Building dependency tree...
   Reading state information...
    Some packages could not be installed. This may mean that you have
   requested an impossible situation or if you are using the unstable
   distribution that some required packages have not yet been created
   or been moved out of Incoming.
   The following information may help to resolve the situation:

   The following packages have unmet dependencies:
     libpq-dev : Depends: libpq5 (= 10.3-1.pgdg18.04+1) but it is not going to be installed
     pgadmin3 : Depends: libgcrypt20 (>= 1.8.0) but 1.7.8-2ubuntu1 is to be installed
                Depends: libpq5 (>= 8.4~) but it is not going to be installed
                Recommends: pgagent but it is not going to be installed
     postgresql : Depends: postgresql-10 but it is not going to be installed
     postgresql-client : Depends: postgresql-client-10
     postgresql-contrib : Depends: postgresql-contrib-10
    E: Unable to correct problems, you have held broken packages.


If it were me I would simplify the above for the moment to:

apt-get install postgresql-10



Is there any reason now not to conclude that the 10.3 bionic version is
simply incompatible with Ubuntu-17.10 (at least without a lot more package
wrangling chops than I have)?

One can install postgresql-10.1 but one cannot upgrade it to get security
fixes or to be able to load data dumped from another 10.3 database.

Given that Ubuntu-18.04 will be out soon I guess this is pretty much moot
except for a few unfortunates like me who absolutely need 10.3 but have no
option to upgrade.  I guess the lesson is that running the Pgdg versions
of Postgresql on any but the LTS versions of Ubuntu is pretty risky.
Live and learn.  Maybe this will help someone else.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Restore - disable triggers - when they fired?

2018-03-21 Thread Adrian Klaver

On 03/21/2018 02:41 AM, Durumdara wrote:

Dear Adrian!

2018-03-20 16:33 GMT+01:00 Adrian Klaver <mailto:adrian.kla...@aklaver.com>>:


On 03/20/2018 07:56 AM, Durumdara wrote:


pg_dump --disable-triggers -d test -U aklaver -t
disable_trigger_test -a -f disable_trigger_test_data.sql


CREATE OR REPLACE FUNCTION public.trigger_test()
...
truncate disable_trigger_test ;

#Note I do this as a superuser.
psql -d test -U postgres -f disable_trigger_test_data.sql

test=> select * from disable_trigger_test ;
  id | fld_1
+---
   1 | dog
   2 | cat


Thank you! Now I got it!
So the difference is:

1.) FULL DB restore into empty database creates the extra objects at the 
end, so I do not need to worry about triggers.


This also holds if you do a single(or multiple) table restore of both 
the table schema and data at the same time.




2.) Partial data restore into an existing database to existing tables 
with triggers: at this point I need to set the "disable triggers" option.


Well partial or complete data restore, anything that will trip the 
triggers on an existing table.
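[Editorial note: for illustration, what pg_dump's --disable-triggers option wraps around the restored data is roughly the following, using the table name from the example above; it requires superuser rights on restore.]

```sql
ALTER TABLE disable_trigger_test DISABLE TRIGGER ALL;
-- ... COPY/INSERT of the dumped data, with no triggers firing ...
ALTER TABLE disable_trigger_test ENABLE TRIGGER ALL;
```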




Thank you very much!


Glad it helped.



dd



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Foreign Key Validation after Reference Table Ownership Change

2018-03-21 Thread Adrian Klaver

On 03/21/2018 10:48 AM, Louis Battuello wrote:




The point is you can't resolve a name like "schema_1.something" unless
you have USAGE on schema_1.  So the RI-checking query, which is run as
the owner of the table, fails at parse time.


That certainly makes sense for user_2 that owns the reference table and is 
blocked by not having usage on the reference table’s schema.

But, user_1 owns both schemas and has usage on both but no longer owns the 
reference table in one schema. Why is user_1’s insert on the referencing table 
failing? Is the validation of the FK no longer done as user_1?



From Tom's post:
"
The point is you can't resolve a name like "schema_1.something" unless
you have USAGE on schema_1.  So the RI-checking query, which is run as
 ^ 


the owner of the table, fails at parse time."
^^^

It is not the user doing the INSERT that matters; it is the user 
that owns the table.
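[Editorial note: a hedged sketch of the fix implied here, using the thread's role and schema names.]

```sql
-- The RI-checking query runs as the table owner (user_2 in this thread),
-- so that role needs USAGE on the schema holding the referenced table.
GRANT USAGE ON SCHEMA schema_1 TO user_2;
```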


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-21 Thread Adrian Klaver

On 03/21/2018 10:59 AM, Stuart McGraw wrote:

On 03/21/2018 07:02 AM, Adrian Klaver wrote:

On 03/20/2018 10:52 PM, Stuart McGraw wrote:
Looks like these posts are coming through a news group to me.
I am Ccing list to get response back there.


Is this something I am doing wrong?  I am posting through the gmane 
newsgroup, which in turn is bi-directionally gatewayed to the list AIUI.


Probably more on my side. I use Reply All and that tells me it will not 
post back to a newsgroup address, so unless the -general address is the 
From: or Cc: it is just going back to you directly From: me only. I just 
CCed the list back in to get the thread back in front of more eyes.







If it where me I would simplify the above for the moment to :

apt-get install postgresql-10


Tried on a fresh Ubuntu-17.10 install (with no postgresql at all installed)
but similar problem persists with the postgresql from the pgdg bionic repo:

   # apt-get install postgresql-10
   ...
   The following packages have unmet dependencies:
    postgresql-10 : Depends: postgresql-client-10
                    Depends: libicu60 (>= 60.1-1~) but it is not installable
                    Depends: libpq5 (>= 9.3~) but it is not going to be installed
                    Depends: libssl1.1 (>= 1.1.0) but it is not installable
                    Recommends: sysstat but it is not going to be installed
   E: Unable to correct problems, you have held broken packages.



Looks like forward compatibility issues, your 17.10 does not have the 
newer versions of files needed by the 18.04 Postgres packages and it 
will not install them as they probably would break other programs in 
17.10.  If it does not work going forward maybe it will work going 
backward, see if you have any better luck using the 16.04(Xenial) repo.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FDW Foreign Table Access: strange LOG message

2018-03-21 Thread Adrian Klaver

On 03/21/2018 11:15 AM, Albrecht Dreß wrote:

Hi Adrian & Tom:

Thanks a lot for your input!

Am 20.03.18 20:38 schrieb(en) Tom Lane:


See comments inline below.


Unfortunately, in my original post, I confused the primary and secondary 
(accessed via the FDW) databases in my example when I anonymised their 
identifiers…  Sorry for the confusion!  The /real/ structure is


     client ---[libpq]--> my_db ---[FDW]--> ext_db

and the LOG message is actually related to the “internal” connection, 
i.e. ext_db (*not* my_db) complains about “could not receive data” after 
accessing the table in ext_db through the FDW from my_db.


The effect is not limited to Python, but happens with psql, too.  I.e. 
running (ext_table is the foreign table mapped into my_db, and a mapping 
exists only for user “user”)


sudo -u user /usr/lib/postgresql/10/bin/psql my_db <<< "select * from 
ext_table limit 1;"


*also* triggers the LOG message


Does it also happen when you open a psql session and do?:

psql>select * from ext_table limit 1;



[time stamp/pid] user@ext_db LOG:  could not receive data from client: 
Connection reset by peer


I don't see any such log messages when testing postgres_fdw here, 
which is odd; why are my results different?


Yes, that's strange!  I have the following deb packages (from the 
stretch-pgdg repo) installed:


postgresql-10    10.3-1.pgdg90+1 amd64
postgresql-client-10 10.3-1.pgdg90+1 amd64
postgresql-client-common 190.pgdg90+1    all
postgresql-common    190.pgdg90+1    all
libpq5:amd64 10.3-1.pgdg90+1 amd64


I am not seeing --contrib which is where I would expect postgres_fdw to 
come from.


Is --contrib installed?

If not where is postgres_fdw coming from?



If these are about the FDW connections, maybe the answer is that 
postgres_fdw ought to establish a backend-exit callback in which it 
can shut down its connections gracefully.


Hmm, yes, sounds like a sensible explanation.


If it's doing that now, I sure don't see where.


So, if I understand you correctly, this needs to be addressed in 
postgres_fdw, and there is nothing I can fix in my setup (except for 
suppressing LOG messages) to get rid of the message?


Thanks again,
Albrecht.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-21 Thread Adrian Klaver

On 03/21/2018 01:31 PM, Stuart McGraw wrote:

On 03/21/2018 12:14 PM, Adrian Klaver wrote:

On 03/21/2018 10:59 AM, Stuart McGraw wrote:

On 03/21/2018 07:02 AM, Adrian Klaver wrote:

On 03/20/2018 10:52 PM, Stuart McGraw wrote:

[...]

If it were me I would simplify the above for the moment to:
apt-get install postgresql-10


Tried on a fresh Ubuntu-17.10 install (with no postgresql at all 
installed)
but similar problem persists with the postgresql from the pgdg bionic 
repo:


   # apt-get install postgresql-10
   ...
   The following packages have unmet dependencies:
    postgresql-10 : Depends: postgresql-client-10
                    Depends: libicu60 (>= 60.1-1~) but it is not installable
                    Depends: libpq5 (>= 9.3~) but it is not going to be installed
                    Depends: libssl1.1 (>= 1.1.0) but it is not installable
                    Recommends: sysstat but it is not going to be installed
   E: Unable to correct problems, you have held broken packages.


Looks like forward compatibility issues, your 17.10 does not have the
newer versions of files needed by the 18.04 Postgres packages and it
will not install them as they probably would break other programs in
17.10.  If it does not work going forward maybe it will work going
backward, see if you have any better luck using the 16.04(Xenial)
repo.


Thanks, I forgot that the older repos also received the pg-10 update.
Unfortunately but no luck with Xenial either, slightly different
but similar conflicts.

My main motivation for updating to 10.3 was to be able to load data
dumped from a 10.3 database.  pg_restore complained about "unsupported
version (1.13) in file header".  However I just discovered I can load
a plain sql dump from that database so panic mode is over :-) and I
can wait until I can upgrade my machine to ubuntu-18.04.


Yeah, pg_restore from 10.x < 10.3 will not restore a custom format dump 
from 10.3. Pretty sure it has to do with this:


https://wiki.postgresql.org/wiki/A_Guide_to_CVE-2018-1058:_Protect_Your_Search_Path
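[Editorial note: for context, the core mitigation that guide recommends is the following; hedged sketch, to be run in each database as a superuser.]

```sql
-- CVE-2018-1058 mitigation: stop ordinary users from creating objects
-- in the public schema, where they could capture unqualified names.
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```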



It still seems to me that the best advice for using Postgresql on
Ubuntu is to use the Ubuntu version of Postgresql if you don't need
the latest version; if you do need latest version, use the Pgdg
version but only with a LTS version of Ubuntu.

If you need the latest version of both Ubuntu and Postgresql, you
may be out of luck.


I stick to LTS releases of Ubuntu as the in between releases tend to be 
too experimental for my taste:) Also the upgrades come to close together.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-21 Thread Adrian Klaver

On 03/21/2018 01:16 PM, chris wrote:
I wasn't able to find what version we had installed, so we went ahead and 
reinstalled it


Maybe I am missing something, but if you could not find the version you 
were using, how do you know installing a new driver actually changed the 
version you are using now?





we downloaded the current version JDBC 4.1 Driver 42.2.1.jre7


We are still having the same problem.

Thanks








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: postgresql-10.3 on unbuntu-17.10 - how??

2018-03-21 Thread Adrian Klaver

On 03/21/2018 01:59 PM, Stuart McGraw wrote:

On 03/21/2018 02:37 PM, Rob Sargent wrote:

Thanks, I forgot that the older repos also received the pg-10 update.
Unfortunately but no luck with Xenial either, slightly different
but similar conflicts.

My main motivation for updating to 10.3 was to be able to load data
dumped from a 10.3 database.  pg_restore complained about "unsupported
version (1.13) in file header".  However I just discovered I can load
a plain sql dump from that database so panic mode is over :-) and I
can wait until I can upgrade my machine to ubuntu-18.04.

It still seems to me that the best advice for using Postgresql on
Ubuntu is to use the Ubuntu version of Postgresql if you don't need
the latest version; if you do need latest version, use the Pgdg
version but only with a LTS version of Ubuntu.

If you need the latest version of both Ubuntu and Postgresql, you
may be out of luck.


Or you compile it?


That was going to be my next step.  But I don't think a simple compile
from source would do because Ubuntu's package manager wouldn't be aware
that Postgresql was now available to satisfy other packages' dependencies.
So I would need to rebuild the Ubuntu source package.  I have done that
on Fedora several times where it has been, in my limited experience,
usually simple and problem free.  But I have read that building packages
on Ubuntu is much more arcane so I wasn't looking forward to it.


That is pretty much the case when you build from source; it will live 
outside the OS packaging universe. I have built from source on Ubuntu and 
it is not any more difficult than on other distros, just remember to 
install build-essential. As far as dependencies go, basically the only 
things that will have a Postgres dependency will be other Postgres 
software, e.g. psycopg2, etc. That means you will need to build them from 
source also, though that is helped along by pg_config, which will find 
your source install and build the other software to match. It also means 
uninstalling the Ubuntu Postgres packages so you don't cross-contaminate.












--
Adrian Klaver
adrian.kla...@aklaver.com



Re: JDBC connectivity issue

2018-03-21 Thread Adrian Klaver

On 03/21/2018 01:56 PM, chris wrote:
I did the reinstall not to change versions but to know what version 
I am running


My previous question was not as clear as it should have been.
So:
1) At some place in your software stack there is some sort of 
configuration that links your app via JDBC to a Postgres JDBC driver. If 
you know where that configuration is you should be able to find the 
driver and presumably the version.
2) So when you say you did a reinstall do you mean you are now pointing 
the configuration at postgresql-42.2.1.jre7.jar? FYI 
postgresql-42.2.2.jre7.jar is actually the latest:

https://jdbc.postgresql.org/download.html




On 03/21/2018 02:44 PM, Adrian Klaver wrote:

On 03/21/2018 01:16 PM, chris wrote:
I wasn't able to find what version we had installed, so we went ahead 
and reinstalled it


Maybe I am missing something, but if you could not find the version 
you were using, how do you know installing a new driver actually 
changed the version you are using now?





we downloaded the current version JDBC 4.1 Driver 42.2.1.jre7


We are still having the same problem.

Thanks














--
Adrian Klaver
adrian.kla...@aklaver.com



Re: FDW Foreign Table Access: strange LOG message

2018-03-22 Thread Adrian Klaver

On 03/22/2018 11:08 AM, Albrecht Dreß wrote:

Hi Adrian:

Am 21.03.18 19:25 schrieb(en) Adrian Klaver:

Does it also happen when you open a psql session and do?:

psql>select * from ext_table limit 1;


No messages are printed whilst the session is open.  However, when I 
quit the session by typing “\q” /after/ entering the command above, the 
LOG message appears.  It does not appear when I query tables from my_db 
exclusively, though, i.e. when I do not access the Foreign Table.


Seems to back up Tom Lanes's observation:

https://www.postgresql.org/message-id/30297.1521574736%40sss.pgh.pa.us
"If these are about the FDW connections, maybe the answer is that
postgres_fdw ought to establish a backend-exit callback in which
it can shut down its connections gracefully.  If it's doing that
now, I sure don't see where."

Looks like it might be time to file a bug report:

https://www.postgresql.org/account/submitbug/


I am not seeing --contrib which is where I would expect postgres_fdw 
to come from.


Apparently, it is packaged with the main server deb file:

albrecht@odysseus:~$ dpkg -S $(find /usr/share/postgresql -name "*fdw*")
postgresql-10: /usr/share/postgresql/10/extension/file_fdw--1.0.sql
postgresql-10: /usr/share/postgresql/10/extension/file_fdw.control
postgresql-10: /usr/share/postgresql/10/extension/postgres_fdw--1.0.sql
postgresql-10: /usr/share/postgresql/10/extension/postgres_fdw.control

…and additionally it contains /usr/lib/postgresql/10/postgres_fdw.so, of 
course.  I double-checked the package 
http://apt.postgresql.org/pub/repos/apt/pool/main/p/postgresql-10/postgresql-10_10.3-1.pgdg90+1_amd64.deb; 
it is installed properly.


My mistake, I was looking at an install that had Postgres 9.6. In that 
case the files for 9.6 were in --contrib, presumably because 10 is now 
the default.





Is --contrib installed?


No.  But the deb file postgresql-contrib_10+190.pgdg90+1_all.deb 
contains only the changelog and copyright files.



If not where is postgres_fdw coming from?


See above…

Cheers,
Albrecht.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Use pgloader with FDW

2018-03-23 Thread Adrian Klaver

On 03/23/2018 03:55 AM, Patricia DECHANDOL wrote:

Hello,

I’m new to Postgresql and try to use the FDW with mysql database.

Everything is OK to create my FDW and foreign tables, but I meet a 
problem when I try to do a select on a FOREIGN table containing a datetime 
column which contains the value “0000-00-00 00:00:00” in mysql.


That is MySQL's version of NULL for datetimes. It is not a valid 
datetime though.




Select on the foreign table fails.

The mysql datetime column has been automatically wrapped to “timestamp 
without timezone” column in the foreign table by the instruction “import 
foreign schema” that I used.




How can I deal with this ?


https://github.com/EnterpriseDB/mysql_fdw/issues/38



I read about pgloader, which can manage this kind of problem, but 
can’t figure out how to use it with FDW.


Thanks for your help.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Use pgloader with FDW

2018-03-23 Thread Adrian Klaver

On 03/23/2018 06:07 AM, Patricia DECHANDOL wrote:

Hello Adrian,

So, if I understand correctly, the only solution is to map the mysql 
datetime columns to "text" columns in my foreign tables, and then use a 
cast function to convert from text to date in Postgres when I want to use 
these columns?


There is also the option of changing the values in the MySQL database to 
either an actual datetime or NULL. Though there is the potential issue 
of what that would do to code that is pulling from the MySQL database.
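[Editorial note: if the column is mapped as text, one hedged way to recover a usable timestamp on the Postgres side is a view like the one below; table and column names are illustrative, not from the thread.]

```sql
-- Map MySQL's zero datetime to NULL, then cast the remaining values.
CREATE VIEW ext_table_clean AS
SELECT NULLIF(created_at, '0000-00-00 00:00:00')::timestamp AS created_at
FROM ext_table;
```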




No other way.
The pgloader can't be used by the FDW to manage this point ?


It has been a while since I used pgloader, but from what I remember it 
is a tool for doing the initial migration of data from MySQL/Sqlite/SQL 
Server to Postgres. What you seem to be looking for is pgloader to sit 
between the MySQL database and the Postgres one and do the 
transformation in real time. AFAIK that is not possible.






-Message d'origine-
De : Adrian Klaver 
Envoyé : vendredi 23 mars 2018 13:52
À : Patricia DECHANDOL ; 
pgsql-general@lists.postgresql.org
Objet : Re: Use pgloader with FDW

On 03/23/2018 03:55 AM, Patricia DECHANDOL wrote:

Hello,

I'm new to Postgresql and try to use the FDW with mysql database.

Everything is OK to create my FDW and foreign tables, but I meet a
problem when I try to do a select on a FOREIGN table containing a
datetime column which contains the value "0000-00-00 00:00:00" in mysql.


That is MySQL's version of NULL for datetimes. It is not a valid datetime 
though.



Select on the foreign table fails.

The mysql datetime column has been automatically wrapped to "timestamp
without timezone" column in the foreign table by the instruction
"import foreign schema" that I used.




How can I deal with this ?


https://github.com/EnterpriseDB/mysql_fdw/issues/38



I read about pgloader, which can manage this kind of problem, but
can't figure out how to use it with FDW.

Thanks for your help.




--
Adrian Klaver
adrian.kla...@aklaver.com




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Foreign Key locking / deadlock issue.... v2

2018-03-24 Thread Adrian Klaver

On 03/24/2018 11:03 AM, HORDER Phil wrote:


I'm running Postgres 9.6.1 (I think)


To find out for sure do:

psql> select version();





Phil Horder
Database Mechanic



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with postgreSQL

2018-03-26 Thread Adrian Klaver

On 03/26/2018 04:41 AM, Gian mario Caggiu wrote:
Hi, I'm a student and I downloaded postgreSQL to my Mac but I found a 
problem. The program does not start, I do not understand why, but that's 


Where did you download it from?

What version of Postgres?

How did you install it?

not what scares me. The fact is that a virtual user has been created 
called 'postgreSQL' which has appropriated all the functions of the 
administrator user. Can I kindly ask you why and how did this happen?



Gian Mario Caggiu



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with postgreSQL

2018-03-27 Thread Adrian Klaver

On 03/27/2018 07:18 AM, Gian mario Caggiu wrote:
Please reply to list also.
Ccing list.



I downloaded the 9.6 version the first time, and when I saw that it didn’t work I 
downloaded another version, 10.3, but this also doesn’t work. All downloads were 
from http://www.postgresql.org .
And I installed it as the website advised me.
And i install it how the website advised me.


There are six choices for Mac:

https://www.postgresql.org/download/macosx/

You will need to be more specific on which option you chose.

Also what Web site are you referring to when talking about install 
instructions?


When you say it did not work, can you be more specific, with error 
messages if possible?





Gian Mario Caggiu


Il giorno 27 mar 2018, alle ore 06:03, Adrian Klaver 
 ha scritto:


On 03/26/2018 04:41 AM, Gian mario Caggiu wrote:
Hi, I'm a student and I downloaded postgreSQL to my Mac but I found a problem. 
The program does not start, I do not understand why, but that's


Where did you download it from?

What version of Postgres?

How did you install it?


not what scares me. The fact is that a virtual user has been created called 
'postgreSQL' which has appropriated all the functions of the administrator 
user. Can I kindly ask you why and how did this happen?
Gian Mario Caggiu



--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Adrian Klaver

On 03/27/2018 11:00 AM, Ken Beck wrote:

I recently upgraded my OS from Ubuntu 14.04 LTS to 16.04 LTS and since,
postgresql refuses to re-start. I am concerned that if I poke around too
much, I may lose my old data. I have consulted various web postings here
and in ubuntu forums, and have not found an obvious solution. But please
excuse me, I am not a Linux expert, and I had a friend help me with
setup issues several years ago. They are no longer available, and I am
having trouble getting things to run.


At a guess when you did the dist-upgrade(14.04 --> 16.04) you got the 
default version of Postgres for 16.04(9.6)


Try, at the command line:

pg_lsclusters




Again, this is after upgrading from ubuntu 14.04 LTS to 16.04 LTS.
Consulting the book "PostgreSQL: Up and Running", I have looked at my
.conf files in /etc/postgresql/9.3/main, things seem to be in order.

Here are the un-commented lines of postgresql.conf: (I have NOT changed
these, they are the settings that used to work in ubuntu 14.04)

NOTE: my comments have // at beginning of the line, these lines are NOT
in the .conf file

data_directory = '/var/lib/postgresql/9.3/main'		# use data in another directory
hba_file = '/etc/postgresql/9.3/main/pg_hba.conf'	# host-based authentication file
ident_file = '/etc/postgresql/9.3/main/pg_ident.conf'	# ident configuration file

external_pid_file = '/var/run/postgresql/9.3-main.pid'	# write an extra PID file

listen_addresses = 'localhost'		# what IP address(es) to listen on
port = 5432				# (change requires restart)
max_connections = 100			# (change requires restart)
unix_socket_directories = '/var/run/postgresql'	# comma-separated list of directories
ssl = false				# (change requires restart)
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'	# (change requires restart)
ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'	# (change requires restart)

// In the logging section, these are the only entries (is this why I
can't see any log files when I try to start the server??)
log_line_prefix = '%t '            # special values:
log_timezone = 'Navajo'

// In Client Connections section, these are the only items active:
datestyle = 'iso, mdy'
timezone = 'Navajo'
lc_messages = 'en_US.UTF-8'            # locale for system error message
                     # strings
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'            # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting
default_text_search_config = 'pg_catalog.english'

The pg_ctl.conf file is basically empty

The pg_hba.conf file has these active lines:
local   all postgres    trust
local   all all peer
host    all all 127.0.0.1/32    trust
host    all all ::1/128 md5

The start.conf file has one active line, 'auto'

The pg_ident.conf file has no entries.

When I try to start postgresql service using this line:
service postgresql reload

then check running services using this:
systemctl list-units --type service --all
I see the lines related to postgres as follows:
postgresql.service loaded    active   exited  PostgreSQL
RDBMS
● postgresql@9.3-main.service    loaded    failed   failed
PostgreSQL Cluster 9.3-main

And, looking for log files, I find none.

If I try to start pgAdmin, I see two servers on localhost. When I tried
to connect, I first get an error that says it cannot open the log file;
on clicking that, I get an error "can't open file /home/XXX/.pgpass,
permission denied". Then I get a prompt for the password for postgres,
and when I enter the known password for the user, I then get a big 'Guru
Error' box with the message:
could not connect to server: Connection refused Is the server running on
host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Note that the postgresql.conf file does list 5432 as the port number.

I hope someone has the patience to look this over and suggest remedies. --

Ken Beck
Liberty, Utah, USA





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with postgreSQL

2018-03-27 Thread Adrian Klaver

On 03/27/2018 11:39 AM, Gian mario Caggiu wrote:


I downloaded two versions, 9.6.8 and 10.3, but neither worked. I 
followed the instructions on the website, and on the page where I had to 
choose the version and the operating system I chose the two versions 
that I have already told you about. The installation of the program then 
did not finish, displaying a message with 'the database cluster 
initialization failed postgresql mac os'.


Which of the downloads did you select from below?:

https://www.postgresql.org/download/macosx/

Without that information it is not possible to provide an answer.



On 27 Mar 2018, at 18:01, Adrian Klaver 
<adrian.kla...@aklaver.com> wrote:


On 03/27/2018 07:18 AM, Gian mario Caggiu wrote:
Please reply to list also.
Ccing list.


I downloaded the 9.6 version first, and when I saw that it didn't 
work I downloaded another version, 10.3, but this also didn't work. 
All downloads were from http://www.postgresql.org.

And I installed it as the website advised me.


There are six choices for Mac:

https://www.postgresql.org/download/macosx/

You will need to be more specific on which option you chose.

Also what Web site are you referring to when talking about install 
instructions?


When you say did not work, can you be more specific, with error 
messages if possible?




Gian Mario Caggiu
On 27 Mar 2018, at 06:03, Adrian Klaver 
<adrian.kla...@aklaver.com> wrote:



On 03/26/2018 04:41 AM, Gian mario Caggiu wrote:
Hi, I'm a student and I downloaded postgreSQL to my Mac but I found 
a problem. The program does not start, I do not understand why, but 
that's


Where did you download it from?

What version of Postgres?

How did you install it?

not what scares me. The fact is that a virtual user called 
'postgreSQL' has been created, which has appropriated all the 
functions of the administrator user. Can I kindly ask you why and 
how this happened?

Gian Mario Caggiu



--
Adrian Klaver
adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-27 Thread Adrian Klaver

On 03/27/2018 01:46 PM, Ken Beck wrote:

I am working on two systems, one running in Oracle VirtualBox on my
laptop, the other in a DigitalOcean droplet. I know on one of them I
tried to remove the postgres-9.6, and it must have been my laptop, here
is the output from pg_lsclusters:

Ver Cluster Port Status Owner    Data directory   Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
/var/log/postgresql/postgresql-9.3-main.log


What happens if you do?:

sudo pg_ctlcluster 9.3 main start



Is it possible my effort to remove 9.6 was not complete or otherwise
screwed things up? (I don't remember the commands I used to do that, sorry)




Ken Beck
Liberty, Utah, USA





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 03:36 PM, armand pirvu wrote:

Hi all


I have a process of some data manipulation and ultimate transfer to a postgres 
database
A DML statement gets concocted with the transformed data and pushed into a 
named pipe
The named pipe is tailed -f in the background like this

nohup $SHELL <

Could it be that pg_stat_activity shows nothing because the DML has 
completed when you look?


Does the data find its way into the database?




Any suggestions on how to approach this, make it better, or monitor it?

Thanks
-- Armand






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 04:07 PM, armand pirvu wrote:

As long as the connection stays up yes data gets fine across
In pg_stat_activity I see the node ip address where tail -f piped into psql 
happens


So what does the rest of that record show? In particular for:

state
query
backend_start

and any others you might think are important from here:

https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW
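Those columns can be pulled in one query; a sketch of such a check (the connection details are borrowed from later in the thread and are illustrative — this needs the live server, so it is not verified here):

```shell
# Illustrative only -- run against the live server. Shows who is
# connected, what state each backend is in, and its last query.
psql -U csidba -d birstdb -h 172.16.10.93 -c "
SELECT pid, client_addr, state, backend_start,
       now() - state_change AS in_state_for,
       query
FROM   pg_stat_activity
WHERE  datname = current_database();"
```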





Sent from my iPhone


On Mar 27, 2018, at 6:03 PM, Adrian Klaver  wrote:


On 03/27/2018 03:36 PM, armand pirvu wrote:
Hi all
I have a process of some data manipulation and ultimate transfer to a postgres 
database
A DML statement gets concocted with the transformed data and pushed into a 
named pipe
The named pipe is tailed -f in the background like this
nohup $SHELL <

Could it be that pg_stat_activity shows nothing because the DML has completed 
when you look?

Does the data find its way into the database?


Any suggestions on how to approach this, make it better, or monitor it?
Thanks
-- Armand



--
Adrian Klaver
adrian.kla...@aklaver.com





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: connection dropped from the backend server

2018-03-27 Thread Adrian Klaver

On 03/27/2018 07:21 PM, armand pirvu wrote:


On Mar 27, 2018, at 6:49 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:


Adrian Klaver <adrian.kla...@aklaver.com> writes:

On 03/27/2018 04:07 PM, armand pirvu wrote:

As long as the connection stays up yes data gets fine across
In pg_stat_activity I see the node ip address where tail -f piped 
into psql happens



So what does the rest of that record show? In particular for:


I wonder how often data gets put into the pipe.  If it's "not very often",
maybe the connection from psql to the server is timing out due to
inactivity?  This would be the fault of a firewall or something in
between.  You could probably fix it by enabling (more aggressive) TCP
keepalive settings.

regards, tom lane
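Tom's keepalive suggestion can be applied client-side without touching the server: libpq accepts keepalive parameters in the connection string, so the long-lived psql could be started like this (the values are illustrative, not tuned):

```shell
# Probe a silent peer after 60s idle, every 10s, give up after 5 probes
psql "host=172.16.10.93 user=csidba dbname=birstdb \
      keepalives=1 keepalives_idle=60 keepalives_interval=10 keepalives_count=5"
```

The server-side equivalents are the tcp_keepalives_idle, tcp_keepalives_interval and tcp_keepalives_count settings in postgresql.conf.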


Well, there is no flow pattern. The flow can be nonexistent for days, 
even weeks, and then it can get super busy.


The data flows as expected until the connection gets dropped. The bolded 
row (marked with *) is from pg_stat_activity (a test I just did):


birstdb=# select datname, pid, client_addr, client_port, backend_start, query_start, state from pg_stat_activity;
 datname |  pid  |  client_addr  | client_port |         backend_start         |          query_start          | state
---------+-------+---------------+-------------+-------------------------------+-------------------------------+--------
 birstdb | 10046 |               |          -1 | 2018-03-27 20:40:11.721804-05 | 2018-03-27 20:47:27.118434-05 | active
*birstdb | 10082 | 192.168.1.187 |       55859 | 2018-03-27 20:43:55.301216-05 | 2018-03-27 20:46:07.190064-05 | idle*



ps -fu armandp
     UID   PID  PPID   C    STIME TTY         TIME CMD
 armandp  3264  3263   0 20:39:13 pts/2       0:00 tail -f /u1/sys_admin/dba/ingres2birst.fifo
 armandp  3265  3263   0 20:39:13 pts/2       0:00 psql -U csidba -d birstdb -h 172.16.10.93




Give it about 10 minutes at most and, bam, out it goes:

birstdb=# select datname, pid, client_addr, client_port, backend_start, query_start, state from pg_stat_activity;
 datname |  pid  | client_addr | client_port |         backend_start         |          query_start          | state
---------+-------+-------------+-------------+-------------------------------+-------------------------------+--------
 birstdb | 10208 |             |          -1 | 2018-03-27 20:51:25.835382-05 | 2018-03-27 21:08:47.164249-05 | active


Although the above two processes are still there.

I think the TCP keepalives might help, but I am also thinking of 
something like a check each minute: if things got into the pipe, well, 
dump 'em to Postgres. Something along these lines.


Any ideas/suggestions you might have to improve this ? I am not saying 


Yeah, dump the named pipe idea and just create the connection for the 
duration of the DML event.
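That per-event pattern can be sketched like this — a runnable toy in which `cat` stands in for `psql` so nothing here needs a database; the fifo name is borrowed from the thread:

```shell
# One short-lived reader per event instead of a single long-lived psql:
# the writer opens the fifo, sends one batch, and closes it; the reader
# drains the batch and exits, so no idle connection is left to time out.
dir=$(mktemp -d)
fifo="$dir/ingres2birst.fifo"
mkfifo "$fifo"

# Reader for this one event ('cat >> out.sql' stands in for psql)
cat "$fifo" >> "$dir/out.sql" &

# Writer: one DML batch, then the pipe is closed
printf 'INSERT INTO t VALUES (1);\n' > "$fifo"
wait            # reader exits at EOF once the writer closes the pipe
cat "$dir/out.sql"
```

With a real database, the reader line would be a fresh `psql -f -` invocation per event, created only for the duration of that DML batch.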


it is perfect, far from it, but I kinda took the model/idea from the 
Nagios named pipe; only that one, too, runs at a defined interval of 
x seconds/minutes




Thank you both
— Armand



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Problem with postgreSQL

2018-03-28 Thread Adrian Klaver

On 03/28/2018 02:36 AM, Gian mario Caggiu wrote:
Please also reply to list so this can be seen by more eyes.
Ccing list

I've already written to you what versions I downloaded, 10.3 and 9.6.8. 
I clicked in the 'download the installer' section and then I selected 
the version to download.


So this section, correct?:
"
Interactive installer by EnterpriseDB

Download the installer certified by EnterpriseDB for all supported 
PostgreSQL versions.

"


For reference, what version of OS X?

After downloading the installer you ran the GUI installer, correct?

Were there any errors when you ran the installation process?




FYI, there are other choices on this page:

https://www.postgresql.org/download/macosx/

One that may be easier to deal with at this stage is:

Postgres.app

Postgres.app is a simple, native macOS app that runs in the menubar 
without the need of an installer. Open the app, and you have a 
PostgreSQL server ready and awaiting new connections. Close the app, and 
the server shuts down.



http://postgresapp.com/





On 27 Mar 2018, at 20:44, Adrian Klaver 
<adrian.kla...@aklaver.com> wrote:


Which of the downloads did you select from below?:

https://www.postgresql.org/download/macosx/

Without that information it is not possible to provide an answer.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-28 Thread Adrian Klaver

On 03/28/2018 09:24 AM, Moreno Andreo wrote:

Il 27/03/2018 22:57, Adrian Klaver ha scritto:

On 03/27/2018 01:46 PM, Ken Beck wrote:

I am working on two systems, one running in Oracle VirtualBox on my
laptop, the other in a DigitalOcean droplet. I know on one of them I
tried to remove the postgres-9.6, and it must have been my laptop, here
is the output from pg_lsclusters:

Ver Cluster Port Status Owner    Data directory Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
/var/log/postgresql/postgresql-9.3-main.log


What happens if you do?:

sudo pg_ctlcluster 9.3 main start

If possible, I'll take a look at 
/var/log/postgresql/postgresql-9.3-main.log.

That may give us a big help in uncovering what's going on.


See this post:
https://www.postgresql.org/message-id/5bad1f01-d06a-1526-96d2-c9cfd6062a11%40gmail.com



Cheers

Moreno.-






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-28 Thread Adrian Klaver

On 03/28/2018 03:45 PM, Ken Beck wrote:

Thanks for both responses -- the file contents I sent were from file
named postgresql-9.3-main.log.1, and I am assuming this means from the


Well that is an archived log and would show old information. The current 
log would be postgresql-9.3-main.log.



9.3, not 9.6. Sorry I did not include that earlier. It's starting to sound
like I need to consult Ubuntu user groups.


Not necessarily, as there are many Ubuntu users on this list.

Did you try?:

sudo pg_ctlcluster 9.3 main start



Ken Beck
Liberty, Utah, USA

On 03/28/2018 10:46 AM, Moreno Andreo wrote:

Il 28/03/2018 18:28, Adrian Klaver ha scritto:

On 03/28/2018 09:24 AM, Moreno Andreo wrote:

Il 27/03/2018 22:57, Adrian Klaver ha scritto:

On 03/27/2018 01:46 PM, Ken Beck wrote:

I am working on two systems, one running in Oracle VirtualBox on my
laptop, the other in a DigitalOcean droplet. I know on one of them I
tried to remove the postgres-9.6, and it must have been my laptop,
here
is the output from pg_lsclusters:

Ver Cluster Port Status Owner    Data directory Log file
9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
/var/log/postgresql/postgresql-9.3-main.log


What happens if you do?:

sudo pg_ctlcluster 9.3 main start

If possible, I'll take a look at
/var/log/postgresql/postgresql-9.3-main.log.
That may give us a big help to uncover what's goin'on.


See this post:
https://www.postgresql.org/message-id/5bad1f01-d06a-1526-96d2-c9cfd6062a11%40gmail.com


Yes I saw it, but I can't say if that log was from 9.3 or 9.6
cluster... as you can see above he wrote that he had a 9.6, that he
tried to remove, and a 9.3, which was not starting. So it's not clear
to me if the log he sent was from one version or the other...





Cheers

Moreno.-

















--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Postgresql 9.3 Server will not start after Ubuntu Upgrade

2018-03-28 Thread Adrian Klaver

On 03/28/2018 07:29 PM, Ken Beck wrote:

Comments in line.


The current log is 0 length, and indeed empty.

I just tried 'sudo pg_ctlcluster 9.3 main start'  and got the following
error:

Error: Config owner (kcb:1000) and data owner (postgres:116) do not
match, and config owner is not root


So why is your config owned by you and the data owned by postgres?

Your configuration files should be under:

/etc/postgresql/9.3/main/

and on my Ubuntu setups anyway is owned by postgres.
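A hedged sketch of putting the ownership back (the paths are the ones from this thread; double-check with ls before chowning anything):

```shell
# Inspect who currently owns the config tree and the data directory
ls -ld /etc/postgresql/9.3/main /var/lib/postgresql/9.3/main

# Hand the config tree back to the postgres system user, then retry
sudo chown -R postgres:postgres /etc/postgresql/9.3/main
sudo systemctl start postgresql@9.3-main
```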



Changed to user postgres, tried again and got:
Warning: the cluster will not be running as a systemd service. Consider
using systemctl:
   sudo systemctl start postgresql@9.3-main
Removed stale pid file.

Next tried the suggestion:

sudo systemctl start postgresql@9.3-main, but the system failed to
accept the password for user postgres. That is NOT something I changed


My experience with Ubuntu is that the postgres system user does not have 
a password. So were you using the database postgres user's password? For 
more detail see here:


https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-16-04
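In practice that means you never type a password for the system account at all; you switch to it with sudo and let the 'peer' line in pg_hba.conf do the rest. A sketch (the ALTER USER line is only needed if the database user's password really is lost):

```shell
# Connect via peer authentication as the postgres *system* user
sudo -u postgres psql -c 'SELECT current_user;'

# From the same account, reset the *database* user's password if needed
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'new-password';"
```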


during the Ubuntu upgrade process, so that has been a puzzle as well. My
main login, user kcb is OK, password still works.

So I change back to user kcb and try the suggestion again:

kcb: jobs(0)$ sudo systemctl start postgresql@9.3-main
Job for postgresql@9.3-main.service failed because the control process
exited with error code. See "systemctl status
postgresql@9.3-main.service" and "journalctl -xe" for details.
kcb: jobs(0)$ systemctl status postgresql@9.3-main.service
● postgresql@9.3-main.service - PostgreSQL Cluster 9.3-main
    Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled;
vendor preset: enabled)
    Active: failed (Result: exit-code) since Wed 2018-03-28 19:51:50 MDT;
47s ago
   Process: 2960 ExecStart=postgresql@%i --skip-systemctl-redirect %i
start (code=exited, status=

Mar 28 19:51:50 kcb-VirtualBox systemd[1]: Starting PostgreSQL Cluster
9.3-main...
Mar 28 19:51:50 kcb-VirtualBox postgresql@9.3-main[2960]: Error: Config
owner (kcb:1000) and dat


Again the error with the config owner being you. Seems it would help to 
track down what copies of the conf files are installed on your machine.




Mar 28 19:51:50 kcb-VirtualBox systemd[1]: postgresql@9.3-main.service:
Control process exited,
Mar 28 19:51:50 kcb-VirtualBox systemd[1]: Failed to start PostgreSQL
Cluster 9.3-main.
Mar 28 19:51:50 kcb-VirtualBox systemd[1]: postgresql@9.3-main.service:
Unit entered failed stat
Mar 28 19:51:50 kcb-VirtualBox systemd[1]: postgresql@9.3-main.service:
Failed with result 'exit

Is it possible a big problem is that the password for user postgres no
longer works for some reason? Not sure why. It appears the password cannot
be reset or changed without knowing the original, and what I thought
was the original no longer works.


Remember there are two postgres users in play here, the system user and 
the database one. Which one are you referring to?





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: unreliable behaviour of track_functions

2018-03-31 Thread Adrian Klaver

On 03/31/2018 04:40 PM, pinker wrote:

Hi All!

I've been experimenting with the track_functions options, and what I've
seen really puzzles me.
The documentation says:
"SQL-language functions that are simple enough to be 'inlined' into the
calling query will not be tracked, regardless of this setting."

But it turns out it depends on many more factors, like duration or
placement in the query; it is totally non-deterministic behaviour.

This really simple SQL function:
CREATE FUNCTION a(a bigint)
   RETURNS bigint
STABLE
LANGUAGE SQL
AS $$
SELECT $1
$$;

is not shown in pg_stat_user_functions at all. It only starts to be shown
when one line:
select pg_sleep(1);

is added!

Another one gets tracked only if I use:
SELECT get_unique_term(2556);

If it lands in FROM then it's not tracked...
SELECT * FROM get_unique_term(2556);

That's the body of the function:
CREATE FUNCTION get_unique_term(i_game_pin bigint)
   RETURNS TABLE(term text, category text)
STABLE
LANGUAGE SQL
AS $$
SELECT
   i.term,
   i.dict_category_id
FROM (SELECT
 categories.term,
 categories.dict_category_id
   FROM categories
   EXCEPT ALL
   SELECT
 games.term,
 games.category
   FROM games
   WHERE game_pin = $1) i
ORDER BY (random())
LIMIT 1;
$$;

What's going on here? That's pretty unreliable behaviour...


?:

https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-USER-FUNCTIONS-VIEW

"...But if you want to see new results with each query, be sure to do 
the queries outside any transaction block. Alternatively, you can invoke 
pg_stat_clear_snapshot(), which will discard the current transaction's 
statistics snapshot (if any). The next use of statistical information 
will cause a new snapshot to be fetched.


A transaction can also see its own statistics (as yet untransmitted to 
the collector) in the views pg_stat_xact_all_tables, 
pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and 
pg_stat_xact_user_functions. These numbers do not act as stated above; 
instead they update continuously throughout the transaction.


"



My version of postgres:
  PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3
20140911 (Red Hat 4.8.3-9), 64-bit

show track_functions;
 track_functions
-----------------
 all
(1 row)



  





--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: unreliable behaviour of track_functions

2018-03-31 Thread Adrian Klaver

On 03/31/2018 05:17 PM, pinker wrote:

Adrian Klaver-4 wrote

https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-USER-FUNCTIONS-VIEW

"...But if you want to see new results with each query, be sure to do
the queries outside any transaction block. Alternatively, you can invoke
pg_stat_clear_snapshot(), which will discard the current transaction's
statistics snapshot (if any). The next use of statistical information
will cause a new snapshot to be fetched.

A transaction can also see its own statistics (as yet untransmitted to
the collector) in the views pg_stat_xact_all_tables,
pg_stat_xact_sys_tables, pg_stat_xact_user_tables, and
pg_stat_xact_user_functions. These numbers do not act as stated above;
instead they update continuously throughout the transaction.

"
Adrian Klaver



adrian.klaver@



Thank you for that, but shouldn't this strange behaviour be better
documented?


This came from the documentation, so I am not sure what you mean by 
better documented?



If somebody (like me, who until some time ago was unaware of this
behaviour) wants to build a monitoring system based on the view
pg_stat_user_functions, should they not be informed what to expect?


See comment above.






--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html





--
Adrian Klaver
adrian.kla...@aklaver.com


