PG 10 logical replication version dependency?

2018-03-02 Thread Colin 't Hart
Hi,

Is there a version dependency when using logical replication? Do both
sides have to be running the same major version? Or is it a wire
protocol that will be backwards compatible in future releases?

I sincerely hope it's the latter so that users of this aren't forced
to upgrade all of their interconnected servers at the same time.

Thanks,

Colin



Re: PG 10 logical replication version dependency?

2018-03-02 Thread Thomas Kellerer
Colin 't Hart schrieb am 02.03.2018 um 09:23:
> Is there a version dependency when using logical replication? Do both
> sides have to be running the same major version? Or is it a wire
> protocol that will be backwards compatible in future releases?
> 
> I sincerely hope it's the latter so that users of this aren't forced
> to upgrade all of their interconnected servers at the same time.

I think the only restriction is that connected servers need to be Postgres 10 
or higher. 

So you should be able to replicate from Postgres 10 to Postgres 11 once the 
next version is released.
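
For illustration, a minimal publication/subscription pair (publication,
subscription, table, and connection details are placeholders):

  -- on the version 10 publisher
  CREATE PUBLICATION mypub FOR TABLE mytable;

  -- on the subscriber (version 10 or, per the above, a later release)
  CREATE SUBSCRIPTION mysub
      CONNECTION 'host=pub.example.com dbname=mydb user=repuser'
      PUBLICATION mypub;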






How can I include sql file in pgTAP unittest?

2018-03-02 Thread Stéphane Klein
Hi,

Context: I would like to write unit tests for PL/pgSQL triggers which use the
postgres_fdw extension. I use pgTAP to write these tests (I use this Docker
environment: poc-postgresql-pgTAP).

All works perfectly with this test file:

BEGIN;
  SELECT plan(1);

  CREATE EXTENSION IF NOT EXISTS postgres_fdw;
  DROP SERVER IF EXISTS db2 CASCADE;
  CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'db2',
dbname 'db2');
  CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password
'password');

  CREATE SCHEMA IF NOT EXISTS db2;

  IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );

  -- ;
ROLLBACK;

Now, I would like to extract db2 initialization in separate file
"/test/init.sql" with this content:

  CREATE EXTENSION IF NOT EXISTS postgres_fdw;
  DROP SERVER IF EXISTS db2 CASCADE;
  CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'db2',
dbname 'db2');
  CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password
'password');

  CREATE SCHEMA IF NOT EXISTS db2;

  IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

Now, my test file is:

BEGIN;
  SELECT plan(1);

  \i /test/init.sql

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );

In the log I see that the "init.sql" file is loaded successfully:

Running tests: /test/*.sql -v
/test/init.sql ... No subtests run

But I have this error:

ERROR:  user mapping not found for "db2"

Question: where is my mistake? How can I include an SQL file in my test?

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane


Tracking changes DML in history log table

2018-03-02 Thread Łukasz Jarych
Hello,

I want to track changes (DML) within one table, as described here:

https://www.cybertec-postgresql.com/en/tracking-changes-in-postgresql/

I need a script or tool that will generate an SQL statement for each changed
row, with the possibility to roll back each statement.

Please help, guys.
Best wishes,
Jacek
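
For illustration, a minimal sketch of the trigger-based approach from the
linked article (the audited table t_example and the history-table layout are
assumptions; the stored row images give you what is needed to generate a
compensating statement per row):

  CREATE TABLE t_history (
      id        serial PRIMARY KEY,
      tstamp    timestamptz NOT NULL DEFAULT now(),
      tabname   text NOT NULL,
      operation text NOT NULL,
      who       text NOT NULL DEFAULT current_user,
      old_val   jsonb,   -- row image before the change (UPDATE/DELETE)
      new_val   jsonb    -- row image after the change (INSERT/UPDATE)
  );

  CREATE OR REPLACE FUNCTION change_trigger() RETURNS trigger AS
  $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          INSERT INTO t_history (tabname, operation, new_val)
          VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
          RETURN NEW;
      ELSIF TG_OP = 'UPDATE' THEN
          INSERT INTO t_history (tabname, operation, old_val, new_val)
          VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
          RETURN NEW;
      ELSE  -- DELETE
          INSERT INTO t_history (tabname, operation, old_val)
          VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
          RETURN OLD;
      END IF;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER t_change
      AFTER INSERT OR UPDATE OR DELETE ON t_example
      FOR EACH ROW EXECUTE PROCEDURE change_trigger();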


Re: How can I include sql file in pgTAP unittest?

2018-03-02 Thread Adrian Klaver

On 03/02/2018 01:17 AM, Stéphane Klein wrote:

Hi,

Context: I would like to write unit tests for PL/pgSQL triggers which use the
postgres_fdw extension. I use pgTAP to write these tests (I use this Docker
environment: poc-postgresql-pgTAP).


All works perfectly with this test file:

BEGIN;
   SELECT plan(1);

   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

   SELECT ok(
     (SELECT COUNT(host_id) FROM db2.hosts) = 1,
     'foobar'
   );

   -- ;
ROLLBACK;

Now, I would like to extract db2 initialization in separate file 
"/test/init.sql" with this content:


   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;

Now, my test file is:

BEGIN;
   SELECT plan(1);

   \i /test/init.sql

   SELECT ok(
     (SELECT COUNT(host_id) FROM db2.hosts) = 1,
     'foobar'
   );

In the log I see that the "init.sql" file is loaded successfully:

Running tests: /test/*.sql -v
/test/init.sql ... No subtests run

But I have this error:

ERROR:  user mapping not found for "db2"

Question: where is my mistake? How can I include an SQL file in my test?


It is early morning here and I may be missing something, but I do not 
see where you actually create server db2. I do see you creating server kea.




Best regards,
Stéphane
--
Stéphane Klein

blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane



--
Adrian Klaver
adrian.kla...@aklaver.com



jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
Good afternoon,

in PostgreSQL 10.3 I have the following table with a jsonb column:

#  \d words_moves;
 Table "public.words_moves"
 Column |   Type   | Collation | Nullable |
Default
+--+---+--+--
 mid| bigint   |   | not null |
nextval('words_moves_mid_seq'::regclass)
 action | text |   | not null |
 gid| integer  |   | not null |
 uid| integer  |   | not null |
 played | timestamp with time zone |   | not null |
 tiles  | jsonb|   |  |
 score  | integer  |   |  |
Indexes:
"words_moves_pkey" PRIMARY KEY, btree (mid)
Check constraints:
"words_moves_score_check" CHECK (score >= 0)
Foreign-key constraints:
"words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid) ON
DELETE CASCADE
"words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid) ON
DELETE CASCADE
Referenced by:
TABLE "words_daily" CONSTRAINT "words_daily_mid_fkey" FOREIGN KEY (mid)
REFERENCES words_moves(mid) ON DELETE CASCADE
TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE

Here are some records (please pardon the non-english chars):

# select * from words_moves where gid=609 limit 3;
-[ RECORD 1 ]-------------------------------------------------------------------
mid    | 1040
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 10:13:57.943876+01
tiles  | [{"col": 3, "row": 7, "value": 2, "letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"}, {"col": 5, "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row": 7, "value": 0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3, "letter": "Я"}]
score  | 10
-[ RECORD 2 ]-------------------------------------------------------------------
mid    | 1041
action | play
gid    | 609
uid    | 7
played | 2018-03-02 10:56:58.72503+01
tiles  | [{"col": 3, "row": 8, "value": 2, "letter": "В"}, {"col": 3, "row": 9, "value": 1, "letter": "И"}, {"col": 3, "row": 10, "value": 2, "letter": "Т"}, {"col": 3, "row": 11, "value": 2, "letter": "К"}, {"col": 3, "row": 12, "value": 1, "letter": "А"}]
score  | 14
-[ RECORD 3 ]-------------------------------------------------------------------
mid    | 1043
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 11:03:58.614094+01
tiles  | [{"col": 0, "row": 10, "value": 2, "letter": "С"}, {"col": 1, "row": 10, "value": 2, "letter": "М"}, {"col": 2, "row": 10, "value": 1, "letter": "О"}, {"col": 4, "row": 10, "value": 2, "letter": "Р"}]
score  | 13

I would like to get the length of the tiles array (because in my word game
7 played tiles mean +15 score bonus) - but that call fails for some reason:

#  select mid, jsonb_array_length(tiles) from words_moves where gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

What am I doing wrong here please?

Regards
Alex


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 05:52 AM, Alexander Farber wrote:

Good afternoon,

in PostgreSQL 10.3 I have the following table with a jsonb column:

#  \d words_moves;
                                      Table "public.words_moves"
  Column |           Type           | Collation | Nullable | 
     Default

+--+---+--+--
  mid    | bigint                   |           | not null | 
nextval('words_moves_mid_seq'::regclass)

  action | text                     |           | not null |
  gid    | integer                  |           | not null |
  uid    | integer                  |           | not null |
  played | timestamp with time zone |           | not null |
  tiles  | jsonb                    |           |          |
  score  | integer                  |           |          |
Indexes:
     "words_moves_pkey" PRIMARY KEY, btree (mid)
Check constraints:
     "words_moves_score_check" CHECK (score >= 0)
Foreign-key constraints:
     "words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES 
words_games(gid) ON DELETE CASCADE
     "words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES 
words_users(uid) ON DELETE CASCADE

Referenced by:
     TABLE "words_daily" CONSTRAINT "words_daily_mid_fkey" FOREIGN KEY 
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE
     TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY 
(mid) REFERENCES words_moves(mid) ON DELETE CASCADE


Here are some records (please pardon the non-english chars):

# select * from words_moves where gid=609 limit 3;
-[ RECORD 1 ]-------------------------------------------------------------------
mid    | 1040
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 10:13:57.943876+01
tiles  | [{"col": 3, "row": 7, "value": 2, "letter": "С"}, {"col": 4, 
"row": 7, "value": 1, "letter": "О"}, {"col": 5, "row": 7, "value": 2, 
"letter": "П"}, {"col": 6, "row": 7, "value": 0, "letter": "Л"}, {"col": 
7, "row": 7, "value": 3, "letter": "Я"}]

score  | 10
-[ RECORD 2 ]-------------------------------------------------------------------
mid    | 1041
action | play
gid    | 609
uid    | 7
played | 2018-03-02 10:56:58.72503+01
tiles  | [{"col": 3, "row": 8, "value": 2, "letter": "В"}, {"col": 3, 
"row": 9, "value": 1, "letter": "И"}, {"col": 3, "row": 10, "value": 2, 
"letter": "Т"}, {"col": 3, "row": 11, "value": 2, "letter": "К"}, 
{"col": 3, "row": 12, "value": 1, "letter": "А"}]

score  | 14
-[ RECORD 3 ]-------------------------------------------------------------------
mid    | 1043
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 11:03:58.614094+01
tiles  | [{"col": 0, "row": 10, "value": 2, "letter": "С"}, {"col": 1, 
"row": 10, "value": 2, "letter": "М"}, {"col": 2, "row": 10, "value": 1, 
"letter": "О"}, {"col": 4, "row": 10, "value": 2, "letter": "Р"}]

score  | 13

I would like to get the length of the tiles array (because in my word 
game 7 played tiles mean +15 score bonus) - but that call fails for some 
reason:


#  select mid, jsonb_array_length(tiles) from words_moves where gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

What am I doing wrong here please?


Are you sure all the values in tiles are correctly formatted because 
when I use jsonb_array_length with the provided data:


test=# select jsonb_array_length( '[{"col": 3, "row": 7, "value": 2, 
"letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"}, {"col": 
5, "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row": 7, "value": 
0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3, "letter": "Я"}]');

 jsonb_array_length
--------------------
                  5

test=# select jsonb_array_length( '[{"col": 3, "row": 8, "value": 2, 
"letter": "В"}, {"col": 3, "row": 9, "value": 1, "letter": "И"}, {"col": 
3, "row": 10, "value": 2, "letter": "Т"}, {"col": 3, "row": 11, "value": 
2, "letter": "К"}, {"col": 3, "row": 12, "value": 1, "letter": "А"}]');

 jsonb_array_length
--------------------
                  5

test=# select jsonb_array_length('[{"col": 0, "row": 10, "value": 2, 
"letter": "С"}, {"col": 1, "row": 10, "value": 2, "letter": "М"}, 
{"col": 2, "row": 10, "value": 1, "letter": "О"}, {"col": 4, "row": 10, 
"value": 2, "letter": "Р"}]');
 jsonb_array_length
--------------------
                  4

it works.

The error message would suggest there is data in tiles that is a scalar rather than an array.

Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
Hi Adrian, thank you for the reply -

On Fri, Mar 2, 2018 at 3:05 PM, Adrian Klaver 
wrote:

> On 03/02/2018 05:52 AM, Alexander Farber wrote:
>
>>
>> in PostgreSQL 10.3 I have the following table with a jsonb column:
>>
>> #  \d words_moves;
>>                                 Table "public.words_moves"
>>  Column |           Type           | Collation | Nullable |                 Default
>> --------+--------------------------+-----------+----------+------------------------------------------
>>  mid    | bigint                   |           | not null | nextval('words_moves_mid_seq'::regclass)
>>  action | text                     |           | not null |
>>  gid    | integer                  |           | not null |
>>  uid    | integer                  |           | not null |
>>  played | timestamp with time zone |           | not null |
>>  tiles  | jsonb                    |           |          |
>>  score  | integer                  |           |          |
>> Indexes:
>>  "words_moves_pkey" PRIMARY KEY, btree (mid)
>> Check constraints:
>>  "words_moves_score_check" CHECK (score >= 0)
>> Foreign-key constraints:
>>  "words_moves_gid_fkey" FOREIGN KEY (gid) REFERENCES words_games(gid)
>> ON DELETE CASCADE
>>  "words_moves_uid_fkey" FOREIGN KEY (uid) REFERENCES words_users(uid)
>> ON DELETE CASCADE
>> Referenced by:
>>  TABLE "words_daily" CONSTRAINT "words_daily_mid_fkey" FOREIGN KEY
>> (mid) REFERENCES words_moves(mid) ON DELETE CASCADE
>>  TABLE "words_scores" CONSTRAINT "words_scores_mid_fkey" FOREIGN KEY
>> (mid) REFERENCES words_moves(mid) ON DELETE CASCADE
>>
>>
>> #  select mid, jsonb_array_length(tiles) from words_moves where gid=609;
>> ERROR:  22023: cannot get array length of a scalar
>> LOCATION:  jsonb_array_length, jsonfuncs.c:1579
>>
>> What am I doing wrong here please?
>>
>
> Are you sure all the values in tiles are correctly formatted because when
> I use jsonb_array_length with the provided data:
>
> test=# select jsonb_array_length( '[{"col": 3, "row": 7, "value": 2,
> "letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"}, {"col": 5,
> "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row": 7, "value": 0,
> "letter": "Л"}, {"col": 7, "row": 7, "value": 3, "letter": "Я"}]');
>  jsonb_array_length
> --------------------
>                   5
>
>
I fill that table with the following stored function (please pardon the
huge listing):

CREATE OR REPLACE FUNCTION words_play_game(
    in_uid   integer,
    in_gid   integer,
    in_tiles jsonb
) RETURNS table (
    out_uid  integer, -- the player to be notified
    out_fcm  text,
    out_apns text,
    out_adm  text,
    out_body text
) AS
$func$
DECLARE
    _tile     jsonb;
    _letter   char;
    _value    integer;
    _col      integer;
    _row      integer;
    _pos      integer;
    _mid      bigint;
    _total    integer;
    _hand_len integer;
    _pile_len integer;
    _move_len integer;
    _pile     char[];
    _hand     char[];
    _letters  char[][];
    _values   integer[][];
    _opponent integer;
    _finished timestamptz;
    _reason   text;
    _score1   integer;
    _score2   integer;
BEGIN
    IF EXISTS (SELECT 1 FROM words_users
               WHERE uid = in_uid AND
                     banned_until > CURRENT_TIMESTAMP) THEN
        RAISE EXCEPTION 'User % is banned', in_uid;
    END IF;

    -- fetch the 4 arrays (_hand, _pile, _letters, _values) for the current game
    SELECT
        hand1,
        pile,
        letters,
        values
    INTO
        _hand,
        _pile,
        _letters,
        _values
    FROM words_games WHERE
        gid = in_gid AND
        player1 = in_uid AND
        -- game is not over yet
        finished IS NULL AND
        -- and it is first player's turn
        (played1 IS NULL OR played1 < played2);

    IF NOT FOUND THEN
        SELECT
            hand2,
            pile,
            letters,
            values
        INTO
            _hand,
            _pile,
            _letters,
            _values
        FROM words_games WHERE
            gid = in_gid AND
            player2 = in_uid AND
            -- game is not over yet
            finished IS NULL AND
            -- and it is second player's turn
            (played2 IS NULL OR played2 < played1);
    END IF;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Game % not found for user %', in_gid, in_uid;

Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
And here is how I call my stored function -

https://gist.github.com/afarber/88a832a1b90a8940764ad69b2b761914

- why wouldn't it store a jsonb array in the tiles column of words_moves
table?

2018-03-02 15:29:42.644 CET [16693] LOG:  statement: DISCARD ALL
2018-03-02 15:29:42.644 CET [16693] LOG:  duration: 0.015 ms
2018-03-02 15:30:33.645 CET [16693] LOG:  statement: select 1
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.094 ms
2018-03-02 15:30:33.645 CET [16693] LOG:  statement: SET DateStyle='ISO';
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.050 ms
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.021 ms  parse
: SET extra_float_digits = 3
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.003 ms  bind
: SET extra_float_digits = 3
2018-03-02 15:30:33.645 CET [16693] LOG:  execute : SET
extra_float_digits = 3
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.006 ms
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.007 ms  parse
: SET application_name = 'PostgreSQL JDBC Driver'
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.002 ms  bind
: SET application_name = 'PostgreSQL JDBC Driver'
2018-03-02 15:30:33.645 CET [16693] LOG:  execute : SET
application_name = 'PostgreSQL JDBC Driver'
2018-03-02 15:30:33.645 CET [16693] LOG:  duration: 0.005 ms
2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.110 ms  parse
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns,
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, $2::int,
$3::jsonb)
2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns,
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, $2::int,
$3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 =
'609', $3 =
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'
2018-03-02 15:30:33.646 CET [16693] LOG:  execute : SELECT
out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, out_adm  AS adm,
out_body AS body  FROM words_play_game($1::int, $2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 =
'609', $3 =
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'
2018-03-02 15:30:33.653 CET [16693] LOG:  duration: 7.567 ms


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:14 AM, Alexander Farber wrote:

Hi Adrian, thank you for the reply -




#  select mid, jsonb_array_length(tiles) from words_moves where
gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

What am I doing wrong here please?


Are you sure all the values in tiles are correctly formatted because
when I use jsonb_array_length with the provided data:

test=# select jsonb_array_length( '[{"col": 3, "row": 7, "value": 2,
"letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"},
{"col": 5, "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row":
7, "value": 0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3,
"letter": "Я"}]');
  jsonb_array_length
 --------------------
                   5


I fill that table with the following stored function (please pardon the 
huge listing):


The little gray cells are not awake enough to work through the below :)
If it were me I would first confirm there was malformed data by looking
at the data itself. If there are not that many records for gid = 609,
maybe a simple select of tiles would be sufficient. Otherwise maybe a
simple plpgsql function that loops through the records, applying
jsonb_array_length and raising a notice on the error. In any case the
point is to identify the presence of malformed data and, if present, the
nature of the malformation. That would help reverse engineer any issues
with the below.




CREATE OR REPLACE FUNCTION words_play_game(
     in_uid   integer,
     in_gid   integer,
     in_tiles jsonb
     ) RETURNS table (
     out_uid  integer, -- the player to be notified
     out_fcm  text,
     out_apns text,
     out_adm  text,
     out_body text
     ) AS




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
Hi Adrian, I 100% agree that nobody except me should debug my huge stored
function, but if you look at my PostgreSQL 10.3 log -

On Fri, Mar 2, 2018 at 3:31 PM, Adrian Klaver 
wrote:

>
> The little gray cells are not awake enough to work through the below :) If
> it were me I would first confirm there was malformed data by looking at
> the data itself. If there are not that many records for gid = 609, maybe a
> simple select of tiles would be sufficient. Otherwise maybe a simple
> plpgsql function that loops through the records, applying jsonb_array_length
> and raising a notice on the error. In any case the point is to identify the
> presence of malformed data and, if present, the nature of the malformation.
> That would help reverse engineer any issues with the below.
>
>
>> CREATE OR REPLACE FUNCTION words_play_game(
>>  in_uid   integer,
>>  in_gid   integer,
>>  in_tiles jsonb
>>  ) RETURNS table (
>>  out_uid  integer, -- the player to be notified
>>  out_fcm  text,
>>  out_apns text,
>>  out_adm  text,
>>  out_body text
>>  ) AS
>>
>
2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.110 ms  parse
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns,
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, $2::int,
$3::jsonb)
2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns,
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, $2::int,
$3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 =
'609', $3 = '[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter"
:"И","row":4,"value":1}]'
2018-03-02 15:30:33.646 CET [16693] LOG:  execute : SELECT
out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, out_adm  AS adm,
out_body AS body  FROM words_play_game($1::int, $2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 =
'609', $3 = '[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter"
:"И","row":4,"value":1}]'

I just pass as the 3rd argument in_tiles to my stored function:
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter"
:"И","row":4,"value":1}]'

and then take the in_tiles and store it unchanged in the words_moves table:

INSERT INTO words_moves (
    action,
    gid,
    uid,
    played,
    tiles
) VALUES (
    'play',
    in_gid,
    in_uid,
    CURRENT_TIMESTAMP,
    in_tiles
) RETURNING mid INTO STRICT _mid;

Does anybody happen to see what I could be doing wrong there?

Thank you for any hints
Alex

P.S.: Here is my stored function:
https://gist.github.com/afarber/88a832a1b90a8940764ad69b2b761914
Here is my table:
https://gist.github.com/afarber/06cc37114ff8dd14f05077f312904361
And here is how I call the stored function from Java:

    String SQL_PLAY_GAME =
"SELECT " +
"out_uid  AS uid,  " +
"out_fcm  AS fcm,  " +
"out_apns AS apns, " +
"out_adm  AS adm,  " +
"out_body AS body  " +
"FROM words_play_game(?::int, ?::int, ?::jsonb)";

private void handlePlay(int gid, String tiles) throws SQLException, IOException {
    LOG.info("handlePlay: {} -> {} {}", mUid, gid, tiles);
    try (Connection db = DriverManager.getConnection(DATABASE_URL, DATABASE_USER, DATABASE_PASS);
         PreparedStatement st = db.prepareStatement(SQL_PLAY_GAME)) {
        st.setInt(1, mUid);
        st.setInt(2, gid);
        st.setString(3, tiles);
        runPlayerAction(st, gid);
    }
}


Re: How can I include sql file in pgTAP unittest?

2018-03-02 Thread Stéphane Klein
2018-03-02 14:52 GMT+01:00 Adrian Klaver :

> On 03/02/2018 01:17 AM, Stéphane Klein wrote:
>
>> Hi,
>>
>> Context: I would like to write unit tests for PL/pgSQL triggers which use
>> the postgres_fdw extension.
>> I use pgTAP to write these tests (I use this
>> Docker environment: poc-postgresql-pgTAP).
>>
>>
>> All works perfectly with this test file:
>>
>> BEGIN;
>>SELECT plan(1);
>>
>>CREATE EXTENSION IF NOT EXISTS postgres_fdw;
>>DROP SERVER IF EXISTS db2 CASCADE;
>>CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host
>> 'db2', dbname 'db2');
>>CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password
>> 'password');
>>
>>CREATE SCHEMA IF NOT EXISTS db2;
>>
>>IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;
>>
>>SELECT ok(
>>  (SELECT COUNT(host_id) FROM db2.hosts) = 1,
>>  'foobar'
>>);
>>
>>-- ;
>> ROLLBACK;
>>
>> Now, I would like to extract db2 initialization in separate file
>> "/test/init.sql" with this content:
>>
>>CREATE EXTENSION IF NOT EXISTS postgres_fdw;
>>DROP SERVER IF EXISTS db2 CASCADE;
>>CREATE SERVER kea FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host
>> 'db2', dbname 'db2');
>>CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password
>> 'password');
>>
>>CREATE SCHEMA IF NOT EXISTS db2;
>>
>>IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;
>>
>> Now, my test file is:
>>
>> BEGIN;
>>SELECT plan(1);
>>
>>\i /test/init.sql
>>
>>SELECT ok(
>>  (SELECT COUNT(host_id) FROM db2.hosts) = 1,
>>  'foobar'
>>);
>>
>> In the log I see that the "init.sql" file is loaded successfully:
>>
>> Running tests: /test/*.sql -v
>> /test/init.sql ... No subtests run
>>
>> But I have this error:
>>
>> ERROR:  user mapping not found for "db2"
>>
>> Question: where is my mistake? How can I include an SQL file in my test?
>>
>
> It is early morning here and I may be missing something, but I do not see
> where you actually create server db2. I do see you creating server kea.


Sorry, it is not kea but db2 in my example. I made a mistake when I replaced
the db name in my example.

You need to read:

  CREATE EXTENSION IF NOT EXISTS postgres_fdw;
  DROP SERVER IF EXISTS db2 CASCADE;
  CREATE SERVER db2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'db2',
dbname 'db2');
  CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password
'password');

  CREATE SCHEMA IF NOT EXISTS db2;

  IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;


Re: Enforce primary key on every table during dev?

2018-03-02 Thread marcelo



On 02/03/2018 01:10 , Daevor The Devoted wrote:



On Fri, Mar 2, 2018 at 12:05 AM, Gavin Flower
<gavinflo...@archidevsys.co.nz> wrote:


On 02/03/18 06:47, Daevor The Devoted wrote:


On Thu, Mar 1, 2018 at 2:07 PM, Rakesh Kumar
<rakeshkumar...@aol.com> wrote:


    >Adding a surrogate key to such a table just adds overhead,
    although that could be useful
    >in case specific rows need updating or deleting without also
    modifying the other rows with
    >that same data - normally, only insertions and selections
happen
    on such tables though,
    >and updates or deletes are absolutely forbidden - corrections
    happen by inserting rows with
    >an opposite transaction.

    I routinely add surrogate keys like a serial col to a table
    already having nice candidate keys, to make it easy to join
    tables.  SQL starts looking ungainly when you have a 3-col
    primary key and need to join it with child tables.


I was always of the opinion that a mandatory surrogate key (as
you describe) is good practice.
Sure there may be a unique key according to business logic
(which may consist of those "ungainly" multiple columns),
but guess what, business logic changes, and then you're
screwed! So using a primary key whose sole purpose is to be a
primary key makes perfect sense to me.


I once worked on a database that had primary keys of at least 4
columns, all character fields; the primary key could easily exceed 45
characters.  The parent-child structure was at least 4 deep.

A child table only needs to know its parent, so there is no
logical need to include its parent's and higher tables' primary keys,
and then have to add a field to make the composite primary key
unique!  So if every table has int (or long) primary keys, then a
child only needs a single field to reference its parent.

Some apparently safe Natural Keys might change unexpectedly.  A
few years back there was a long thread on Natural versus
Surrogate keys - plenty of examples where using Natural Keys can
give grief when they had to be changed!  I think it best to
isolate a database from external changes as much as is practicable.

Surrogate keys also simplify coding, be it in SQL or Java, or
whatever language is flavour of the month.  Also it makes setting
up test data and debugging easier.

I almost invariably define a Surrogate key when I design tables.


Cheers,
Gavin


Thank you! I think you have expressed far more clearly what I have 
been trying to say. +10 to you.

Me too. Another +10.




Release upgrade failure

2018-03-02 Thread Pawan Sharma
Hi All,

Please let me know the reasons for release upgrade failures in PostgreSQL
which you have faced while doing them.

Example: 9.5.1 to 9.5.2


Re: Release upgrade failure

2018-03-02 Thread David G. Johnston
On Fri, Mar 2, 2018 at 7:55 AM, Pawan Sharma  wrote:

> Hi All,
>
> Please let me know the reasons for release upgrade failures in PostgreSQL
> which you have faced while doing them.
>
> Example: 9.5.1 to 9.5.2
>
>
It is highly unlikely that you will encounter a failure when upgrading to
a new minor release.

Major releases are a different matter. Usually the upgrade happens
just fine, but if you aren't careful to also update your applications they
could stop working if you are using features that have changed
incompatibly, though we do try to minimize that.

David J.


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:42 AM, Alexander Farber wrote:
Hi Adrian, I 100% agree that nobody except me should debug my huge 
stored function, but if you look at my PostgreSQL 10.3 log -




Which proves what has already been proven, that at least some of the 
data is correct. The issue is data that is not correct as evidenced by 
the error message:


select mid, jsonb_array_length(tiles) from words_moves where gid=609;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579

This is not going to get solved until you identify the 'bad' tiles data.


2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.110 ms  parse 
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, 
$2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] LOG: duration: 0.058 ms  bind 
: SELECT out_uid  AS uid, out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body FROM words_play_game($1::int, 
$2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 = 
'609', $3 = 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'
2018-03-02 15:30:33.646 CET [16693] LOG:  execute : SELECT 
out_uid AS uid,  out_fcm  AS fcm,  out_apns AS apns, out_adm  AS adm,  
out_body AS body  FROM words_play_game($1::int, $2::int, $3::jsonb)
2018-03-02 15:30:33.646 CET [16693] DETAIL:  parameters: $1 = '7', $2 = 
'609', $3 = 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'


I just pass as the 3rd argument in_tiles to my stored function: 
'[{"col":0,"letter":"К","row":3,"value":2},{"col":0,"letter":"И","row":4,"value":1}]'


and then take the in_tiles and store it unchanged in the words_moves table:

     INSERT INTO words_moves (
     action,
     gid,
     uid,
     played,
     tiles
     ) VALUES (
     'play',
     in_gid,
     in_uid,
     CURRENT_TIMESTAMP,
     in_tiles
     ) RETURNING mid INTO STRICT _mid;

Does anybody happen to see what I could be doing wrong there?

Thank you for any hints
Alex

P.S.: Here is my stored function:
https://gist.github.com/afarber/88a832a1b90a8940764ad69b2b761914
     Here is my table:
https://gist.github.com/afarber/06cc37114ff8dd14f05077f312904361

     And here is how I call the stored function from Java:

    String SQL_PLAY_GAME    =
     "SELECT " +
     "out_uid  AS uid,  " +
     "out_fcm  AS fcm,  " +
     "out_apns AS apns, " +
     "out_adm  AS adm,  " +
     "out_body AS body  " +
     "FROM words_play_game(?::int, ?::int, ?::jsonb)";

     private void handlePlay(int gid, String tiles) throws SQLException, 
IOException {

     LOG.info("handlePlay: {} -> {} {}", mUid, gid, tiles);
     try (Connection db = DriverManager.getConnection(DATABASE_URL, 
DATABASE_USER, DATABASE_PASS);
     PreparedStatement st = 
db.prepareStatement(SQL_PLAY_GAME)) {

     st.setInt(1, mUid);
     st.setInt(2, gid);
     st.setString(3, tiles);
     runPlayerAction(st, gid);
     }
     }



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How to perform PITR when all of the logs won't fit on the drive

2018-03-02 Thread Jeff Janes
On Thu, Mar 1, 2018 at 2:28 PM, Tony Sullivan 
wrote:

> Hello,
>
> I have a situation where something was deleted from a database that
> shouldn't have been so I am having to take a base backup and perform a
> point-in-time-recovery. The problem I have is that the decompressed WAL
> files will not fit on the drive of the machine I am trying to do the
> restore on.
>
> I am wondering if I can arrange the WAL files by date and copy them to the
> directory where they belong and then copy another batch when those are
> restored or if I will need to find some other way of performing the
> recovery.
>
>
You could do that if your timing is right.  You need to either make sure
the next batch shows up before the first file in that batch is requested,
or have your restore command wait and retry rather than throw an error when
it asks for a file that does not exist.  But your restore command can copy
them from a network drive, or remotely with scp or rsync, and also
decompress them on the fly.  That seems simpler.
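
For example, a restore_command along these lines fetches each requested
segment from a backup host and decompresses it on the fly (the host name and
archive path are assumptions):

  restore_command = 'ssh backuphost cat /archive/%f.gz | gunzip > %p'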

Another problem you might run into is that the restored WAL records are
retained in pg_xlog for two restart points before being removed, and until
fairly recent versions of PostgreSQL restart points were governed only by
checkpoint_timeout, and not by WAL volume.  So if your system restores far
faster than it took to generate the WAL in the first place, this could lead
to massive amounts of WAL kept in pg_xlog running you out of disk space.
So you should lower checkpoint_timeout for recovery to be much less than it
was in production.

Cheers,

Jeff


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:42 AM, Alexander Farber wrote:
Hi Adrian, I 100% agree that nobody except me should debug my huge 
stored function, but if you look at my PostgreSQL 10.3 log -




Another thought, in :

2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind 
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns, 
out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, 
$2::int, $3::jsonb)


where is $3::jsonb coming from?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: How can I include sql file in pgTAP unittest?

2018-03-02 Thread Adrian Klaver

On 03/02/2018 06:46 AM, Stéphane Klein wrote:




Sorry, it is not kea but db2 in my example. I made a mistake when I
replaced the db name in my example.


You need to read:

   CREATE EXTENSION IF NOT EXISTS postgres_fdw;
   DROP SERVER IF EXISTS db2 CASCADE;
   CREATE SERVER db2 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 
'db2', dbname 'db2');
   CREATE USER MAPPING FOR USER SERVER db2 OPTIONS (user 'db2', password 
'password');


   CREATE SCHEMA IF NOT EXISTS db2;

   IMPORT FOREIGN SCHEMA public FROM SERVER db2 INTO db2;


I don't know much about pgTAP, but my first instinct would be to change:

BEGIN;
  SELECT plan(1);

  \i /test/init.sql

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );

to:


BEGIN;

  \i /test/init.sql

  SELECT plan(1);

  SELECT ok(
(SELECT COUNT(host_id) FROM db2.hosts) = 1,
'foobar'
  );


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread David G. Johnston
On Friday, March 2, 2018, Adrian Klaver  wrote:

> On 03/02/2018 06:42 AM, Alexander Farber wrote:
>
>>
>>
> 2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
> : SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS apns,
> out_adm  AS adm,  out_body AS body  FROM words_play_game($1::int, $2::int,
> $3::jsonb)
>
> where is $3::jsonb coming from?
>
>
Java prepared statement I think, using setString.

Not at computer right now but what does:

'[1,2,3]'::text::jsonb result in compared to '[1,2,3]'::jsonb ?

Because the system thinks your array-looking string is actually just a
scalar that happens to have a leading [ and a trailing ].

David J.


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:04 AM, David G. Johnston wrote:
On Friday, March 2, 2018, Adrian Klaver wrote:


On 03/02/2018 06:42 AM, Alexander Farber wrote:



2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
: SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS
apns, out_adm  AS adm,  out_body AS body  FROM
words_play_game($1::int, $2::int, $3::jsonb)

where is $3::jsonb coming from?


Java prepared statement I think, using setString.


I was thinking more about the ultimate source of the data. The
words_play_game() function, AFAICT, just passes the jsonb from input
into the words_moves table. If that is the case the problem may occur
further upstream, where the jsonb array is actually built.




Not at computer right now but what does:

'[1,2,3]'::text::jsonb result in compared to '[1,2,3]'::jsonb ?

Because the system thinks your array-looking string is actually just a
scalar that happens to have a leading [ and a trailing ].


David J.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
Good evening -

On Fri, Mar 2, 2018 at 7:11 PM, Adrian Klaver 
wrote:

>
>> 2018-03-02 15:30:33.646 CET [16693] LOG:  duration: 0.058 ms  bind
>> : SELECT out_uid  AS uid,  out_fcm  AS fcm,  out_apns AS
>> apns, out_adm  AS adm,  out_body AS body  FROM
>> words_play_game($1::int, $2::int, $3::jsonb)
>>
>> where is $3::jsonb coming from?
>>
>>
>> I was thinking more about the ultimate source of the data. The
> words_play_game() function, AFAICT, just passes the jsonb from input into
> the word_moves table. If that is the case the problem may occur further
> upstream where the jsonb array is actually built.
>
>
that argument comes over Websocket - either from my Android app,
or from my HTML5 game at https://slova.de/words/Words.js (just an array of
objects, stringified):

var tiles = [];
for (var i = boardTiles.length - 1; i >= 0; i--) {
    var tile = boardTiles[i];
    tiles.push({
        letter: tile.letter,
        value:  tile.value,
        col:    tile.col,
        row:    tile.row
    });
}

var play = {
    social:  SOCIAL,
    sid:     SID,
    auth:    AUTH,
    action:  'play',
    gid:     gid,
    tiles:   tiles
};
ws.send(JSON.stringify(play));

Regards
Alex


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
# select * from words_moves where gid=609 limit 3;
-[ RECORD 1 ]-------------------------------------------------------------------
mid    | 1040
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 10:13:57.943876+01
tiles  | [{"col": 3, "row": 7, "value": 2, "letter": "С"}, {"col": 4, "row": 7, "value": 1, "letter": "О"}, {"col": 5, "row": 7, "value": 2, "letter": "П"}, {"col": 6, "row": 7, "value": 0, "letter": "Л"}, {"col": 7, "row": 7, "value": 3, "letter": "Я"}]
score  | 10
-[ RECORD 2 ]-------------------------------------------------------------------
mid    | 1041
action | play
gid    | 609
uid    | 7
played | 2018-03-02 10:56:58.72503+01
tiles  | [{"col": 3, "row": 8, "value": 2, "letter": "В"}, {"col": 3, "row": 9, "value": 1, "letter": "И"}, {"col": 3, "row": 10, "value": 2, "letter": "Т"}, {"col": 3, "row": 11, "value": 2, "letter": "К"}, {"col": 3, "row": 12, "value": 1, "letter": "А"}]
score  | 14
-[ RECORD 3 ]-------------------------------------------------------------------
mid    | 1043
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 11:03:58.614094+01
tiles  | [{"col": 0, "row": 10, "value": 2, "letter": "С"}, {"col": 1, "row": 10, "value": 2, "letter": "М"}, {"col": 2, "row": 10, "value": 1, "letter": "О"}, {"col": 4, "row": 10, "value": 2, "letter": "Р"}]
score  | 13

#  select column_name, data_type from information_schema.columns where
table_name='words_moves';
 column_name |        data_type
-------------+--------------------------
 mid | bigint
 action  | text
 gid | integer
 uid | integer
 played  | timestamp with time zone
 tiles   | jsonb
 score   | integer
(7 rows)

#  select jsonb_array_length(tiles) from words_moves where gid=609 limit 3;
 jsonb_array_length
--------------------
                  5
                  5
                  4
(3 rows)

BUT:

#  select jsonb_array_length(tiles) from words_moves where gid=609 ;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579


Which means only some data is bad - but how do I find it, please?


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
Oops, I've got strings there too - when swapping instead of playing tiles:

#  select * from words_moves where gid=609 and action <> 'play';
 mid  | action | gid | uid  |            played             |  tiles   | score
------+--------+-----+------+-------------------------------+----------+-------
 1063 | swap   | 609 | 1192 | 2018-03-02 14:13:24.684301+01 | "ТСНЦУЭ" |     ¤
 1067 | swap   | 609 | 1192 | 2018-03-02 15:31:14.378474+01 | "ЕЯУЕФП" |     ¤
 1068 | swap   | 609 |    7 | 2018-03-02 15:52:07.629119+01 | "ОЕЕАУ"  |     ¤
 1072 | swap   | 609 |    7 | 2018-03-02 16:06:43.365012+01 | "ЕЕЫ"    |     ¤
 1076 | swap   | 609 |    7 | 2018-03-02 16:20:18.933948+01 | "Ъ"      |     ¤
(5 rows)

So my problem was simple. I am sorry for the numerous mails!

(It is just such a long chain - Android/HTML5 - Jetty - PostgreSQL - that
sometimes I get confused. At least I am glad to have most of my logic in
PL/pgSQL, right at the data.)

Regards
Alex


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread David G. Johnston
On Fri, Mar 2, 2018 at 11:48 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Oops, I've got strings there too - when swapping instead of playing tiles:
>
>
You should probably add:

jsonb_array_length(tiles) > 0

as a check constraint on the column.

David J.


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:43 AM, Alexander Farber wrote:

# select * from words_moves where gid=609 limit 3;
-[ RECORD 1 ]-------------------------------------------------------------------
mid    | 1040
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 10:13:57.943876+01
tiles  | [{"col": 3, "row": 7, "value": 2, "letter": "С"}, {"col": 4, 
"row": 7, "value": 1, "letter": "О"}, {"col": 5, "row": 7, "value": 2, 
"letter": "П"}, {"col": 6, "row": 7, "value": 0, "letter": "Л"}, {"col": 
7, "row": 7, "value": 3, "letter": "Я"}]

score  | 10
-[ RECORD 2 ]-------------------------------------------------------------------
mid    | 1041
action | play
gid    | 609
uid    | 7
played | 2018-03-02 10:56:58.72503+01
tiles  | [{"col": 3, "row": 8, "value": 2, "letter": "В"}, {"col": 3, 
"row": 9, "value": 1, "letter": "И"}, {"col": 3, "row": 10, "value": 2, 
"letter": "Т"}, {"col": 3, "row": 11, "value": 2, "letter": "К"}, 
{"col": 3, "row": 12, "value": 1, "letter": "А"}]

score  | 14
-[ RECORD 3 ]-------------------------------------------------------------------
mid    | 1043
action | play
gid    | 609
uid    | 1192
played | 2018-03-02 11:03:58.614094+01
tiles  | [{"col": 0, "row": 10, "value": 2, "letter": "С"}, {"col": 1, 
"row": 10, "value": 2, "letter": "М"}, {"col": 2, "row": 10, "value": 1, 
"letter": "О"}, {"col": 4, "row": 10, "value": 2, "letter": "Р"}]

score  | 13

#  select column_name, data_type from information_schema.columns where 
table_name='words_moves';

  column_name |        data_type
 -------------+--------------------------
  mid         | bigint
  action      | text
  gid         | integer
  uid         | integer
  played      | timestamp with time zone
  tiles       | jsonb
  score       | integer
(7 rows)

#  select jsonb_array_length(tiles) from words_moves where gid=609 limit 3;
  jsonb_array_length
 --------------------
                   5
                   5
                   4
(3 rows)

BUT:

#  select jsonb_array_length(tiles) from words_moves where gid=609 ;
ERROR:  22023: cannot get array length of a scalar
LOCATION:  jsonb_array_length, jsonfuncs.c:1579


Which means only some data is bad - but how do I find it, please?


What does this return?:

select count(*) from words_moves where gid=609;

A simplistic approach would be:

select mid, jsonb_array_length(tiles) from words_moves where gid=609
order by mid limit x;

where you increment x until you trigger the error.

A more sophisticated approach would be to use plpgsql EXCEPTION handling:

https://www.postgresql.org/docs/10/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

and create a function that loops through:

select jsonb_array_length(tiles) from words_moves where gid=609 ;

and RAISES a NOTICE for each incorrect value along with its mid value.
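
For illustration, a sketch of such a function (the function name is a
placeholder); alternatively, jsonb_typeof(tiles) <> 'array' in a WHERE clause
would locate the offending rows directly:

  CREATE OR REPLACE FUNCTION find_bad_tiles(in_gid integer)
  RETURNS void AS
  $func$
  DECLARE
      _rec record;
      _len integer;
  BEGIN
      FOR _rec IN SELECT mid, tiles FROM words_moves WHERE gid = in_gid LOOP
          BEGIN
              _len := jsonb_array_length(_rec.tiles);  -- raises on scalars
          EXCEPTION WHEN OTHERS THEN
              RAISE NOTICE 'mid % has non-array tiles: %', _rec.mid, _rec.tiles;
          END;
      END LOOP;
  END
  $func$ LANGUAGE plpgsql;

  SELECT find_bad_tiles(609);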










--
Adrian Klaver
adrian.kla...@aklaver.com



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Alexander Farber
I see, thank you for your comments, David and Adrian.

In the "tiles" column I actually save either the JSON array of tiles - when
the user plays them -

or a string (which is valid jsonb too) concatenated from the letters - when
the user swaps them.

Maybe I should rethink my table structure (I just want to "log" all plays,
swaps, skips, and resigns in the words_moves table)...

Or maybe I should always check the "action" column first (it acts as an
enum) - before accessing the "tiles" column.

Regards
Alex


Is there a continuous backup for pg ?

2018-03-02 Thread Gary M
Hi,

I have an unusual requirement for schema based, live backup of a 24/7
database processing 100K inserts/updates per hour. The data store is around
100TB.

The requirement is supporting an incremental backup of 10 minute windows.
Replication is not considered a backup against malicious action.

Are there any best practices or solutions that can meet these
requirements?

b/r
gary


Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread David G. Johnston
On Fri, Mar 2, 2018 at 11:58 AM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> I see, thank you for your comments, David and Adrian.
>
> In the "tiles" column actually save either the JSON array of tiles - when
> the user plays them
>
> Or a string (which is jsonb too) concatenated of letters - when the user
> swaps the letters.
>
> Maybe I should rethink my table structure (I just want to "log" all plays,
> swaps, skips, resigns in the words_moves table)...
>
> Or maybe I should always check for the "action" column first (acts as
> enum) - before accessing "tiles" column
>
>
A table constraint like:

CASE WHEN action = 'play'
   THEN jsonb_array_length(tiles) > 0
   WHEN action = 'swap'
   THEN ...
   ELSE false
END

is perfectly fine - though having actual non-null values take on different
meanings based upon the enum is generally not a good idea.  I've been using
the above to enforce conditional not-null constraints when I don't want to
implement explicit inheritance.
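
For illustration, a concrete form of such a constraint (the constraint name
and the exact rule per action are assumptions, using the lowercase action
values seen in the data; jsonb_typeof() avoids calling jsonb_array_length()
on a scalar):

  ALTER TABLE words_moves ADD CONSTRAINT words_moves_tiles_check
      CHECK (CASE WHEN action = 'play' THEN jsonb_typeof(tiles) = 'array'
                  WHEN action = 'swap' THEN jsonb_typeof(tiles) = 'string'
                  ELSE tiles IS NULL
             END);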

David J.


Re: Is there a continuous backup for pg ?

2018-03-02 Thread Joshua D. Drake

On 03/02/2018 11:05 AM, Gary M wrote:

Hi,

I have an unusual requirement for schema based, live backup of a 24/7 
database processing 100K inserts/updates per hour. The data store is 
around 100TB.


The requirement is supporting an incremental backup of 10 minute
windows.  Replication is not considered a backup against malicious action.


Are there any best practices or solutions that can meet these 
requirements ?


A cold standby using PITR?

JD



b/r
gary



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Re: Is there a continuous backup for pg ?

2018-03-02 Thread Steve Atkins

> On Mar 2, 2018, at 11:05 AM, Gary M  wrote:
> 
> Hi,
> 
> I have an unusual requirement for schema based, live backup of a 24/7 
> database processing 100K inserts/updates per hour. The data store is around 
> 100TB.
> 
> The requirement is supporting an incremental backup of 10 minute windows.
> Replication is not considered a backup against malicious action.
> 
> Are there any best practices or solutions that can meet these requirements ? 

Sounds almost like you're looking for point-in-time recovery, which will let 
you restore an entire cluster to any time in the past (if you can afford the 
storage), using physical replication.

https://www.postgresql.org/docs/current/static/continuous-archiving.html

There are several sets of third-party tools that'll help with the setup, 
monitoring and other tooling. Look for omnipitr or pitrtools.

(I'd guess you could build something similar that would let you do logical 
recovery by recording changesets from a logical replication connection, but I 
don't know if anyone has put that together.)
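
As a rough sketch (the archive location and target time are placeholders; the
recovery.conf form applies up through version 10):

  # postgresql.conf on the primary
  wal_level = replica
  archive_mode = on
  archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'

  # recovery.conf on a restored base backup
  restore_command = 'cp /mnt/archive/%f %p'
  recovery_target_time = '2018-03-02 12:00:00'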

Cheers,
  Steve


Re: Is there a continuous backup for pg ?

2018-03-02 Thread Gary M
Thanks Josh,

PITR is the option I was recommending. Project management diligence
dictates I need at least one other option from a different recommending
source, two other options optimally.

On Fri, Mar 2, 2018 at 12:16 PM, Joshua D. Drake 
wrote:

> On 03/02/2018 11:05 AM, Gary M wrote:
>
>> Hi,
>>
>> I have an unusual requirement for schema based, live backup of a 24/7
>> database processing 100K inserts/updates per hour. The data store is around
>> 100TB.
>>
>> The requirement is supporting an incremental backup of 10 minute
>> windows.  Replication is not considered a backup against malicious action.
>>
>> Are there any best practices or solutions that can meet these
>> requirements ?
>>
>
> A cold standby using PITR?
>
> JD
>
>
>> b/r
>> gary
>>
>
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
>
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>


Re: Is there a continuous backup for pg ?

2018-03-02 Thread David Steele
Hi Gary,

On 3/2/18 2:05 PM, Gary M wrote:
> Hi,
> 
> I have an unusual requirement for schema based, live backup of a 24/7
> database processing 100K inserts/updates per hour. The data store is
> around 100TB.
> 
> The requirement is supporting an incremental backup of 10 minute
> windows.  Replication is not considered a backup against malicious action.
> 
> Are there any best practices or solutions that can meet these
> requirements ? 
pgBackRest is specifically designed to handle very large clusters and
high WAL rates.  Backup, restore, and archiving can be run in parallel
to speed operations.  The new version 2 has been optimized to make
archive-push even faster than version 1 and we will be releasing an
optimized archive-get soon.

You would be best off achieving your 10-minute windows with daily
incremental backups and then recovery with PITR to the required time.
PITR allows you to specify any time for recovery.
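
For example, against an already-configured stanza (the stanza name main is a
placeholder):

  pgbackrest --stanza=main --type=incr backup
  pgbackrest --stanza=main --delta --type=time "--target=2018-03-02 12:00:00" restore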


Regards,
-- 
-David
da...@pgmasters.net



Re: jsonb_array_length: ERROR: 22023: cannot get array length of a scalar

2018-03-02 Thread Adrian Klaver

On 03/02/2018 10:58 AM, Alexander Farber wrote:

I see, thank you for your comments, David and Adrian.

In the "tiles" column actually save either the JSON array of tiles - 
when the user plays them


Or a string (which is jsonb too) concatenated of letters - when the user 
swaps the letters.


Maybe I should rethink my table structure (I just want to "log" all 
plays, swaps, skips, resigns in the words_moves table)...


Or make the string the value of an array:

[{"swap": "ТСНЦУЭ"}]

so you are not changing the inner JSON in the field.



Or maybe I should always check the "action" column first (it acts as an
enum) - before accessing the "tiles" column.


Regards
Alex




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PQConsumeinput stuck on recv

2018-03-02 Thread Andres Freund
Hi,


On 2018-03-01 19:20:04 -0300, Andre Oliveira Freitas wrote:
> I was able to capture the backtrace again, now with libpq debugging symbols.
> 
> Thread 15 (Thread 0x7f8cec068700 (LWP 68)):
> #0  0x7f8d252a1d9b in __libc_recv (fd=150, buf=0x7f8cf0034410,
> n=16384, flags=623517083, flags@entry=0) at
> ../sysdeps/unix/sysv/linux/x86_64/recv.c:33
> #1  0x7f8d26689783 in recv (__flags=0, __n=,
> __buf=, __fd=) at
> /usr/include/x86_64-linux-gnu/bits/socket2.h:44
> #2  pqsecure_raw_read (conn=conn@entry=0x7f8cf001e390, ptr= out>, len=) at
> /build/postgresql-10-rIfDLC/postgresql-10-10.2/build/../src/interfaces/libpq/fe-secure.c:230
> #3  0x7f8d26689863 in pqsecure_read
> (conn=conn@entry=0x7f8cf001e390, ptr=, len= out>) at 
> /build/postgresql-10-rIfDLC/postgresql-10-10.2/build/../src/interfaces/libpq/fe-secure.c:217
> #4  0x7f8d266810ea in pqReadData (conn=conn@entry=0x7f8cf001e390)
> at 
> /build/postgresql-10-rIfDLC/postgresql-10-10.2/build/../src/interfaces/libpq/fe-misc.c:683
> #5  0x7f8d2667e6f2 in PQconsumeInput (conn=0x7f8cf001e390) at
> /build/postgresql-10-rIfDLC/postgresql-10-10.2/build/../src/interfaces/libpq/fe-exec.c:1704
> 
> In this case, I also checked the pg_stat_activity and this particular
> connection on pg server side was idle for 15 minutes. I killed it
> using pg_terminate_backend, and then somehow the application resumed
> execution.

There's something decidedly weird going on. Libpq always keeps the
connection in nonblocking mode internally. Blocking is implemented by
using select on the socket. So this should never block.

Is there any chance parts of your application change the socket's
block-y-ness?  I see your code is passing the socket around, so perhaps
that's happening somewhere outside of the file?


> As you can see, recv has received a non-zero value in flags

Well, no, not really. recv() has 0 in flags; it's just libc's internal
implementation that appears to be showing up weird, afaict.


Greetings,

Andres Freund



Re: Is there a continuous backup for pg ?

2018-03-02 Thread Gary M
Thanks everyone !!

I think I have enough alternatives for this project. I'll start testing on
Monday..

have a good weekend
-g

On Fri, Mar 2, 2018 at 12:55 PM, David Steele  wrote:

> Hi Gary,
>
> On 3/2/18 2:05 PM, Gary M wrote:
> > Hi,
> >
> > I have an unusual requirement for schema based, live backup of a 24/7
> > database processing 100K inserts/updates per hour. The data store is
> > around 100TB.
> >
> > The requirement is supporting an incremental backup of 10 minute
> > windows.  Replication is not considered a backup against malicious action.
> >
> > Are there any best practices or solutions that can meet these
> > requirements ?
> pgBackRest is specifically designed to handle very large clusters and
> high WAL rates.  Backup, restore, and archiving can be run in parallel
> to speed operations.  The new version 2 has been optimized to make
> archive-push even faster than version 1 and we will be releasing an
> optimized archive-get soon.
>
> You would be best off achieving your 10-minute windows with daily
> incremental backups and then recovery with PITR to the required time.
> PITR allows you to specify any time for recovery.
>
>
> Regards,
> --
> -David
> da...@pgmasters.net
>


Jira database won't start after disk filled up

2018-03-02 Thread Paul Costello
I have a database that wouldn't start due to the disk filling up back on
1/10, unbeknownst to us until 2/27.  This is jira, so it's critical data.
It appears jira was running in memory that entire time.

I needed to run pg_resetxlog -f in order to start the database.  It
started, but upon logging in I found the system catalog and some data to be
corrupt.

I was able to run a pg_dumpall on the database and restore it to a
re-initialized cluster.  However, there were 3 primary key errors during
the restore, because duplicate data got into the tables.

My hypothesis is that because of the system catalog corruption the primary
key uniqueness was not being enforced.  Not sure when this occurred though
1) right after the disk filled up 2) when I ran pg_resetxlog -f or 3) after
I ran pg_resetxlog and before I did the backup.  jira was still running
after I got it started and I waited a few hours to do the backup.  My guess
is the duplicate data got in there right after the disk filled up on 1/10
though.

We had a snapshot from 1/5 which is restored to production, such as it is.
But, they created another test vm for me to attempt to bring data back to
2/27.

Is there anything I can do short of pg_resetxlog -f to bring this database
back up more safely, and possibly avoid the duplicate data/primary key
errors?  It wouldn't start without the force option.  Should I simply shut
down jira, try pg_resetxlog -f again and do the pg_dumpall immediately?

These are the errors I am currently seeing while trying to start the
database.

2018-03-02 11:01:06 CST LOG:  database system was interrupted; last known
up at 2018-01-10 12:19:01 CST
2018-03-02 11:01:06 CST LOG:  database system was not properly shut down;
automatic recovery in progress
2018-03-02 11:01:06 CST LOG:  redo starts at 36/B8556D58
2018-03-02 11:01:06 CST LOG:  incomplete startup packet
2018-03-02 11:01:07 CST FATAL:  the database system is starting up
...
2018-03-02 11:01:12 CST LOG:  incomplete startup packet
2018-03-02 11:01:29 CST FATAL:  the database system is starting up
...
2018-03-02 11:01:30 CST LOG:  record with zero length at 36/F754CBD8
2018-03-02 11:01:30 CST LOG:  redo done at 36/F754CBA8
2018-03-02 11:01:30 CST LOG:  last completed transaction was at log time
2018-02-26 17:55:43.238541-06

Any ideas or thoughts are appreciated.

Paul


Re: Is there a continuous backup for pg ?

2018-03-02 Thread Vick Khera
On Fri, Mar 2, 2018 at 2:34 PM, Gary M  wrote:

> Thanks Josh,
>
> PITR is the option I was recommending. Project management diligence
> dictates I need at least one other option from a different recommending
> source, two other options optimally.
>

Use file system snapshots on a robust system like ZFS. You can roll back to
any snapshot almost trivially, and view any snapshot trivially.


Re: Jira database won't start after disk filled up

2018-03-02 Thread Vick Khera
On Fri, Mar 2, 2018 at 4:32 PM, Paul Costello  wrote:

> I have a database that wouldn't start due to the disk filling up back on
> 1/10, unbeknownst to us until 2/27.  This is jira, so it's critical data.
> It appears jira was running in memory that entire time.
>


Those first two sentences seem contradictory...


>
> I needed to run pg_resetxlog -f in order to start the database.  It
> started, but upon logging in I found the system catalog and some data to be
> corrupt.
>


Once you did this, fixing the data is really on you. Postgres has no way to
know what any of the data mean, nor how to decide what to keep and what to
toss on those conflicting rows with duplicate keys.
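
A duplicate scan along these lines shows exactly which rows collide; this is
a sketch, jiraissue and its id primary key are just the obvious Jira example,
so substitute your real tables and keys:

  -- list primary-key values that appear more than once
  SELECT id, count(*)
  FROM jiraissue
  GROUP BY id
  HAVING count(*) > 1;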

What I'd personally do is take your 1/5 backup, then merge in rows for
tickets and affiliated data from whatever you can recover in the current
database copy you have. Once that's done, run jira's built-in integrity
checker then do a full export to XML backup format. Finally re-import that
into a fresh jira so you know what's in there is consistent.  You'll
probably also have to cross-reference the attachments directory for missing
tickets and clean up those files (or synthesize tickets for them).
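
A sketch of that merge for a single table, assuming the damaged copy is
reachable as a foreign schema named "recovered" (a hypothetical name, set up
e.g. via postgres_fdw or a second restore) and that id is the primary key:

  -- copy over only the rows missing from the 1/5 backup
  INSERT INTO jiraissue
  SELECT * FROM recovered.jiraissue r
  WHERE NOT EXISTS
        (SELECT 1 FROM jiraissue j WHERE j.id = r.id);

Repeat per table, parents before children, so foreign key references resolve.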

If your jira is configured to send email somewhere on ticket updates,
gathering those (even if it is in multiple people's mailboxes) and
recreating ticket info from them would also move you along.

You will lose some of your data because not all of it was written to disk.


Re: Jira database won't start after disk filled up

2018-03-02 Thread Paul Costello
Yes, contradictory.  We recently disabled email forwarding, so on 1/10 when
the disk filled up, we never received any alerts.  This is a new position for
me, so I was completely unaware of this database, except on a conceptual
level, until Tuesday.

I think the best I can do is get this database back to 1/10.  In my first
restore attempt I noticed that the most recent jiraissue.resolutiondate was
1/10, so jira was running in memory the entire time and nothing was flushed
to the db, probably because it was corrupted, and possibly for other reasons
related to neglect.

My hope is that I can get the db back to 1/10 and maybe we can, with
Atlassian's help, somehow sync the Lucene files back to the db.  I don't
think I will have any postgres data to work with beyond 1/10.

Does this still sound do-able with that kind of data gap?

On Fri, Mar 2, 2018 at 3:44 PM, Vick Khera  wrote:

> On Fri, Mar 2, 2018 at 4:32 PM, Paul Costello  wrote:
>
>> I have a database that wouldn't start due to the disk filling up back on
>> 1/10, unbeknownst to us until 2/27.  This is jira, so it's critical data.
>> It appears jira was running in memory that entire time.
>>
>
>
> Those first two sentences seem contradictory...
>
>
>>
>> I needed to run pg_resetxlog -f in order to start the database.  It
>> started, but upon logging in I found the system catalog and some data to be
>> corrupt.
>>
>
>
> Once you did this, fixing the data is really on you. Postgres has no way
> to know what any of the data mean, nor how to decide what to keep and what
> to toss on those conflicting rows with duplicate keys.
>
> What I'd personally do is take your 1/5 backup, then merge in rows for
> tickets and affiliated data from whatever you can recover in the current
> database copy you have. Once that's done, run jira's built-in integrity
> checker then do a full export to XML backup format. Finally re-import that
> into a fresh jira so you know what's in there is consistent.  You'll
> probably also have to cross-reference the attachments directory for missing
> tickets and clean up those files (or synthesize tickets for them).
>
> If your jira is configured to send email somewhere on ticket updates,
> gathering those (even if it is in multiple people's mailboxes) and
> recreating ticket info from them would also move you along.
>
> You will lose some of your data because not all of it was written to disk.
>
>


Re: Is there a continuous backup for pg ?

2018-03-02 Thread Gary M
Vick,

I would love to use ZFS, but this project requires RHEL/SELinux MLS.  Without
MLS and RH support, ZFS is a no-go.

On Fri, Mar 2, 2018 at 2:34 PM, Vick Khera  wrote:

> On Fri, Mar 2, 2018 at 2:34 PM, Gary M  wrote:
>
>> Thanks Josh,
>>
>> PITR is the option I was recommending. Project management diligence
>> dictates I need at least one other option from a different recommending
>> source, two other options optimally.
>>
>
> Use file system snapshots on a robust system like ZFS. You can roll back to
> any snapshot almost trivially, and view any snapshot trivially.
>


SQL syntax

2018-03-02 Thread Dale Seaburg
My mind is drawing a blank.  Not sure where to go to find the answer.  
Here is the statement in C#:


sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE ";

To finish off the WHERE clause, I need to look at the first 2 letters, 
like "D:".  My question is how do I specify in the WHERE clause, to look 
at the first 2 characters in the Image_Filename column?  What is the 
correct SQL syntax for looking at just a portion of a column?


I feel so dumb...  This can't be that hard, but I am exhausted and 
running out of ideas.  I need someone to give me a boost!  ;-)


Thanks,
Dale.



Re: SQL syntax

2018-03-02 Thread Adrian Klaver

On 03/02/2018 04:36 PM, Dale Seaburg wrote:
My mind is drawing a blank.  Not sure where to go to find the answer. 
Here is the statement in C#:


sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE ";

To finish off the WHERE clause, I need to look at the first 2 letters, 
like "D:".  My question is how do I specify in the WHERE clause, to look 
at the first 2 characters in the Image_Filename column?  What is the 
correct SQL syntax for looking at just a portion of a column?


SELECT "Image_Filename" FROM "Instruments" WHERE "ImageFilename" LIKE 'D:%';

If you want case insensitive then ILIKE. For more info:
https://www.postgresql.org/docs/10/static/functions-matching.html#FUNCTIONS-LIKE
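
Plugged back into the C# statement from the original post, the finished
string would look like this (a sketch completing the WHERE clause):

  sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE \"Image_Filename\" LIKE 'D:%'";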



I feel so dumb...  This can't be that hard, but I am exhausted and 
running out of ideas.  I need someone to give me a boost!  ;-)


Thanks,
Dale.





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: SQL syntax

2018-03-02 Thread Ken Tanzer
On Fri, Mar 2, 2018 at 4:41 PM, Adrian Klaver 
wrote:

> On 03/02/2018 04:36 PM, Dale Seaburg wrote:
>
>> My mind is drawing a blank.  Not sure where to go to find the answer.
>> Here is the statement in C#:
>>
>> sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE ";
>>
>> To finish off the WHERE clause, I need to look at the first 2 letters,
>> like "D:".  My question is how do I specify in the WHERE clause, to look at
>> the first 2 characters in the Image_Filename column?  What is the correct
>> SQL syntax for looking at just a portion of a column?
>>
>
> SELECT "Image_Filename" FROM "Instruments" WHERE "ImageFilename" LIKE
> 'D:%';
>
> If you want case insensitive then ILIKE. For more info:
> https://www.postgresql.org/docs/10/static/functions-matching.html#FUNCTIONS-LIKE
>
>
It's not clear what kind of test you're trying to apply to those first two
characters.  If you want to pull them out to use in an expression (e.g., if
you want to see if they are greater or less than something else), you can
use SUBSTRING or LEFT.

SELECT SUBSTRING('abc' FROM 1 FOR 2),LEFT('abc',2),LEFT('abc',2)<'aa'
AS is_less,LEFT('abc',2)>'aa' AS is_more;

 substring | left | is_less | is_more
---+--+-+-
 ab| ab   | f   | t
(1 row)
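
Applied to the table from the original post, the LEFT form would be (sketch):

  SELECT "Image_Filename" FROM "Instruments"
  WHERE LEFT("Image_Filename", 2) = 'D:';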

Cheers,
Ken



--
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: SQL syntax

2018-03-02 Thread Tom Lane
Adrian Klaver  writes:
> On 03/02/2018 04:36 PM, Dale Seaburg wrote:
>> To finish off the WHERE clause, I need to look at the first 2 letters, 
>> like "D:".  My question is how do I specify in the WHERE clause, to look 
>> at the first 2 characters in the Image_Filename column?  What is the 
>> correct SQL syntax for looking at just a portion of a column?

> SELECT "Image_Filename" FROM "Instruments" WHERE "ImageFilename" LIKE 'D:%';

Another way is to use the substring() function:

SELECT "Image_Filename" FROM "Instruments"
  WHERE substring("Image_Filename", 1, 2) = 'D:';

or if you want to use the SQL committee's COBOLish syntax:

SELECT "Image_Filename" FROM "Instruments"
  WHERE substring("Image_Filename" FROM 1 FOR 2) = 'D:';

Depending on what you're doing, either the pattern-match way or the
substring way might be more convenient.  The performance implications
are different too, though that won't matter to you unless you're dealing
with so much data that you want to create a specialized index to make
queries of this form faster.
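
For completeness, such a specialized index could look like one of these
sketches, depending on which query form you settle on:

  -- matches the substring() equality form above
  CREATE INDEX instruments_prefix_idx
    ON "Instruments" (substring("Image_Filename", 1, 2));

  -- lets LIKE 'D:%' prefix searches use a btree under a non-C locale
  CREATE INDEX instruments_pattern_idx
    ON "Instruments" ("Image_Filename" text_pattern_ops);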

regards, tom lane



Re: SQL syntax

2018-03-02 Thread Rob Sargent


> On Mar 2, 2018, at 5:41 PM, Adrian Klaver  wrote:
> 
>> On 03/02/2018 04:36 PM, Dale Seaburg wrote:
>> My mind is drawing a blank.  Not sure where to go to find the answer. Here 
>> is the statement in C#:
>> sSQL = "SELECT \"Image_Filename\" FROM \"Instruments\" WHERE ";
>> To finish off the WHERE clause, I need to look at the first 2 letters, like 
>> "D:".  My question is how do I specify in the WHERE clause, to look at the 
>> first 2 characters in the Image_Filename column?  What is the correct SQL 
>> syntax for looking at just a portion of a column?
> 
> SELECT "Image_Filename" FROM "Instruments" WHERE "ImageFilename" LIKE 'D:%';
> 
> If you want case insensitive then ILIKE. For more info:
> https://www.postgresql.org/docs/10/static/functions-matching.html#FUNCTIONS-LIKE
> 
>> I feel so dumb...  This can't be that hard, but I am exhausted and running 
>> out of ideas.  I need someone to give me a boost!  ;-)
>> Thanks,
>> Dale.
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
Use the tilde, Luke.
filename ~ '^D:'
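
Spelled out against the original query (a sketch):

  SELECT "Image_Filename" FROM "Instruments"
  WHERE "Image_Filename" ~ '^D:';

The ^ anchors the match at the start of the string; without it, ~ 'D:' would
match "D:" anywhere in the filename.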