Problems modifying view

2019-11-14 Thread stan
I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
getting the following error:

ERROR:  cannot change name of view column "descrip" to "contact_person_1"

I suppose I can drop the view and recreate it, but that seems to indicate
that the CREATE OR REPLACE functionality is not working the way I would
expect.

Am I missing something here?

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: Problems modifying view

2019-11-14 Thread Igor Korot
Hi,

On Thu, Nov 14, 2019 at 7:54 AM stan  wrote:
>
> I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
> getting the following error:
>
> ERROR:  cannot change name of view column "descrip" to "contact_person_1"
>
> I suppose  I can drop the view, and recreate it, but that seems to indicate
> that the create or replace functionality is not functioning the way I would
> expect.
>
> Am I missing something here?

What version?
What OS is the server running on?
What client are you using?

Thank you.

>
>
>




Re: Problems modifying view

2019-11-14 Thread Adrian Klaver

On 11/14/19 5:53 AM, stan wrote:

I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
getting the following error:

ERROR:  cannot change name of view column "descrip" to "contact_person_1"

I suppose  I can drop the view, and recreate it, but that seems to indicate
that the create or replace functionality is not functioning the way I would
expect.

Am I missing something here?



https://www.postgresql.org/docs/11/sql-createview.html

"CREATE OR REPLACE VIEW is similar, but if a view of the same name 
already exists, it is replaced. The new query must generate the same 
columns that were generated by the existing view query (that is, the 
same column names in the same order and with the same data types), but 
it may add additional columns to the end of the list. The calculations 
giving rise to the output columns may be completely different."




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Problems modifying view

2019-11-14 Thread Tom Lane
Adrian Klaver  writes:
> On 11/14/19 5:53 AM, stan wrote:
>> I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
>> getting the following error:
>> ERROR:  cannot change name of view column "descrip" to "contact_person_1"
>> Am I missing something here?

> https://www.postgresql.org/docs/11/sql-createview.html

> "CREATE OR REPLACE VIEW is similar, but if a view of the same name 
> already exists, it is replaced. The new query must generate the same 
> columns that were generated by the existing view query (that is, the 
> same column names in the same order and with the same data types), but 
> it may add additional columns to the end of the list. The calculations 
> giving rise to the output columns may be completely different."

Yeah, the important point being that you can only add columns at the
*end* of the view, just like you can only add table columns at the
end.  The same-names-and-types check is intended to catch simple
mistakes in this area.

If you actually want to rename an existing view column, use
ALTER TABLE ... RENAME COLUMN ... for that.  (We probably ought
to offer an ALTER VIEW spelling of that, but we don't ATM.
ALTER TABLE works though.)
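
For illustration, the behavior described above might look like this (a
sketch against a hypothetical table "t", not taken from the original
thread):

```sql
CREATE TABLE t (a integer, b text);
CREATE VIEW v AS SELECT a, b FROM t;

-- OK: new columns may be appended at the end of the view's column list.
CREATE OR REPLACE VIEW v AS SELECT a, b, a + 1 AS c FROM t;

-- Fails with: ERROR: cannot change name of view column "b" to "b2"
-- CREATE OR REPLACE VIEW v AS SELECT a, b AS b2, a + 1 AS c FROM t;

-- To actually rename a view column, use the ALTER TABLE spelling:
ALTER TABLE v RENAME COLUMN b TO b2;
```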

regards, tom lane




RE: ERROR: COPY escape must be a single one-byte character (multi-delimiter appears to work on Postgres 9.0 but does not on Postgres 9.2)

2019-11-14 Thread Kevin Brannen
>Adrian Klaver wrote:
>On 11/13/19 4:40 PM, Brandon Ragland wrote:
>> Hello,
>>
>> I have a Talend enterprise job that loads data into a PostgreSQL
>> database via the COPY command. When migrating to a new server this
>> command fails with the following error message:
>> org.postgresql.util.PSQLException:ERROR: COPY escape must be a single
>> one-byte character
>
>Does the Postgres log offer anymore information?
>
>Can you crank up the error level to get more info?
>
>Are the encodings for the Postgres server/OS different ?


What is the COPY command you're using?

To extend Adrian's good questions, have you really examined the line in
question carefully with a tool that will show you the true characters being
used? You didn't show the COPY command, but to me, this sounds like you
have a "fancy" character happening. The place I most often see this is
with double quotes:

U+0022  normal ASCII double quote " (1 byte)
U+201C  left double quote (multi-byte!)

Depending on the font used, those two can look very much alike, and there
are far more characters that can be mixed up like this. The insidious ones
for me are the whitespace characters, e.g. 0xA0 (non-breaking space). I find
this happens frequently when the source of a copy-paste is another program,
e.g. MS Word; our ticket system did this for a while too. :(
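
One quick way to expose such lookalike characters (a hypothetical helper,
not from the original thread) is to print each character's Unicode code
point and name:

```python
import unicodedata

def show_chars(s: str) -> list[str]:
    """Describe each character as 'U+XXXX NAME', so lookalikes such as
    U+201C (left double quote) vs U+0022 (ASCII quote) stand out."""
    return [f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}" for ch in s]

# The two quotes below render almost identically in many fonts:
for desc in show_chars('"\u201c'):
    print(desc)
```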

HTH,
Kevin


Re: Problems modifying view

2019-11-14 Thread Adrian Klaver

On 11/14/19 7:12 AM, Tom Lane wrote:

Adrian Klaver  writes:

On 11/14/19 5:53 AM, stan wrote:

I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
getting the following error:
ERROR:  cannot change name of view column "descrip" to "contact_person_1"
Am I missing something here?



https://www.postgresql.org/docs/11/sql-createview.html



"CREATE OR REPLACE VIEW is similar, but if a view of the same name
already exists, it is replaced. The new query must generate the same
columns that were generated by the existing view query (that is, the
same column names in the same order and with the same data types), but
it may add additional columns to the end of the list. The calculations
giving rise to the output columns may be completely different."


Yeah, the important point being that you can only add columns at the
*end* of the view, just like you can only add table columns at the
end.  The same-names-and-types check is intended to catch simple
mistakes in this area.

If you actually want to rename an existing view column, use
ALTER TABLE ... RENAME COLUMN ... for that.  (We probably ought
to offer an ALTER VIEW spelling of that, but we don't ATM.
ALTER TABLE works though.)


Alright, I'm missing something here:

test=# \d up_test
              Table "public.up_test"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
 col1   | boolean |           |          |
 col_2  | integer |           |          |


test=# \d+ test_view
                         View "public.test_view"
 Column |  Type   | Collation | Nullable | Default | Storage | Description
--------+---------+-----------+----------+---------+---------+-------------
 id     | integer |           |          |         | plain   |
 col1   | boolean |           |          |         | plain   |
 col_2  | integer |           |          |         | plain   |
View definition:
 SELECT up_test.id,
    up_test.col1,
    up_test.col_2
   FROM up_test;



test=# alter table up_test rename COLUMN col1 to col_1;
ALTER TABLE
test=# \d up_test
              Table "public.up_test"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
 col_1  | boolean |           |          |
 col_2  | integer |           |          |

test=# \d+ test_view
                         View "public.test_view"
 Column |  Type   | Collation | Nullable | Default | Storage | Description
--------+---------+-----------+----------+---------+---------+-------------
 id     | integer |           |          |         | plain   |
 col1   | boolean |           |          |         | plain   |
 col_2  | integer |           |          |         | plain   |
View definition:
 SELECT up_test.id,
    up_test.col_1 AS col1,
    up_test.col_2
   FROM up_test;


test=# create or replace view test_view as select id, col_1 , col_2 from 
up_test;

ERROR:  cannot change name of view column "col1" to "col_1"

The underlying table column name changes, but the view column is aliased 
to the original column name.




regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Problems modifying view

2019-11-14 Thread Tom Lane
Adrian Klaver  writes:
> On 11/14/19 7:12 AM, Tom Lane wrote:
>> If you actually want to rename an existing view column, use
>> ALTER TABLE ... RENAME COLUMN ... for that.

> Alright, I'm missing something here:

> test=# alter table up_test rename COLUMN col1 to col_1;
> ALTER TABLE
> ...
> test=# \d+ test_view
>View "public.test_view"
>   Column |  Type   | Collation | Nullable | Default | Storage | Description
> +-+---+--+-+-+-
>   id | integer |   |  | | plain   |
>   col1   | boolean |   |  | | plain   |
>   col_2  | integer |   |  | | plain   |
> View definition:
>   SELECT up_test.id,
>  up_test.col_1 AS col1,
>  up_test.col_2
> FROM up_test;

Right, at this point the names of the underlying column and the view
column are out of sync, so the view definition must incorporate a
renaming AS to be correct.

> test=# create or replace view test_view as select id, col_1 , col_2 from 
> up_test;
> ERROR:  cannot change name of view column "col1" to "col_1"

This is attempting to change the view output column's name to col_1
(since you didn't write "AS col1"), and it won't let you.  You could
do "ALTER TABLE test_view RENAME COLUMN col1 TO col_1" to put things
back in sync, if that's what you want.

regards, tom lane




Re: INOUT PARAMETERS WITH RETURN TABLES IN FUNCTION

2019-11-14 Thread Adrian Klaver

On 11/13/19 11:58 PM, İlyas Derse wrote:

Please reply to list also.
Ccing list.


I want to do like this ;


I can't make sense of the below. I don't see where "x" and "y" are used
in the function, unless they are supposed to be "id" and "filesize". I have
no idea what the RETURN QUERY is doing; pretty sure you don't want an
unconstrained SELECT on a table. Can you provide a working example of
what you are doing in SQL Server, with sample output from same?




CREATE OR REPLACE FUNCTION public."test"(INOUT "x" integer, INOUT "y" text)
RETURNS TABLE("id" integer, "filesize" character varying(36)) AS $$
BEGIN
  RETURN QUERY SELECT * FROM public."tbl_employees";
END;
$$ LANGUAGE plpgsql;


I need to call table and inout parameters together at another place.


Adrian Klaver wrote on Fri, 8 Nov 2019 at 18:25:


On 11/8/19 12:18 AM, İlyas Derse wrote:
 > I'm trying to migrate to PostgreSQL from SQL Server. I have Stored
 > Procedures that have output parameters and return tables. How can I do
 > both together?

Can you show an example of a SQL Server procedure that demonstrates what
you want to achieve?

 >
 > Its like ;
 >
 > CREATE or replace FUNCTION public."test" (INOUT "x" integer,
INOUT "y"
 > character varying(36))
 >
 > RETURNS TABLE  (
 > "id" integer,
 > "filesize" character varying(36)
 > )
 > AS $$
 >   BEGIN
 >    x=6;
 > RETURN QUERY
 > SELECT * FROM    public."tbl_employees" ;
 >
 > END;
 > $$ LANGUAGE plpgsql;
 >
 > I can not create that because of inout parameters.
 > Another place;
 >
 > do $$
 > DECLARE b integer = 1;
 > DECLARE d integer = 2 ;
 > BEGIN
 >    select * from public."test"();
 > END;
 > $$;
 >
 >   Anybody have an idea ?


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com
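
For what it's worth, PostgreSQL rejects the function above because OUT/INOUT
parameters aren't allowed together with RETURNS TABLE. One possible
workaround (a hedged sketch only, not something proposed in the thread;
"tbl_employees" and its columns are assumed from the quoted message) is to
fold the scalar outputs into OUT columns and return SETOF record:

```sql
-- Sketch, untested: every result row repeats the scalar "output" values.
CREATE OR REPLACE FUNCTION public.test(IN x_in integer, IN y_in text,
                                       OUT x integer, OUT y text,
                                       OUT id integer,
                                       OUT filesize character varying(36))
RETURNS SETOF record
LANGUAGE plpgsql AS $$
BEGIN
  RETURN QUERY
    SELECT 6, y_in, e.id, e.filesize
    FROM public.tbl_employees e;
END;
$$;

-- Hypothetical usage:
-- SELECT * FROM public.test(1, 'abc');
```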




Re: Problems modifying view

2019-11-14 Thread Adrian Klaver

On 11/14/19 7:45 AM, Tom Lane wrote:

Adrian Klaver  writes:

On 11/14/19 7:12 AM, Tom Lane wrote:

If you actually want to rename an existing view column, use
ALTER TABLE ... RENAME COLUMN ... for that.



Alright, I'm missing something here:



test=# alter table up_test rename COLUMN col1 to col_1;
ALTER TABLE
...
test=# \d+ test_view
View "public.test_view"
   Column |  Type   | Collation | Nullable | Default | Storage | Description
+-+---+--+-+-+-
   id | integer |   |  | | plain   |
   col1   | boolean |   |  | | plain   |
   col_2  | integer |   |  | | plain   |
View definition:
   SELECT up_test.id,
  up_test.col_1 AS col1,
  up_test.col_2
 FROM up_test;


Right, at this point the names of the underlying column and the view
column are out of sync, so the view definition must incorporate a
renaming AS to be correct.


test=# create or replace view test_view as select id, col_1 , col_2 from
up_test;
ERROR:  cannot change name of view column "col1" to "col_1"


This is attempting to change the view output column's name to col_1
(since you didn't write "AS col1"), and it won't let you.  You could
do "ALTER TABLE test_view RENAME COLUMN col1 TO col_1" to put things
back in sync, if that's what you want.


Aah. You do ALTER TABLE on the view, that was the part I missed.

Yeah an ALTER VIEW ... version of that would be more intuitive.



regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Problems modifying view

2019-11-14 Thread Adrian Klaver

On 11/14/19 7:54 AM, Adrian Klaver wrote:

On 11/14/19 7:45 AM, Tom Lane wrote:

Adrian Klaver  writes:

On 11/14/19 7:12 AM, Tom Lane wrote:

If you actually want to rename an existing view column, use
ALTER TABLE ... RENAME COLUMN ... for that.



Alright, I'm missing something here:



test=# alter table up_test rename COLUMN col1 to col_1;
ALTER TABLE
...
test=# \d+ test_view
    View "public.test_view"
   Column |  Type   | Collation | Nullable | Default | Storage | 
Description
+-+---+--+-+-+- 


   id | integer |   |  | | plain   |
   col1   | boolean |   |  | | plain   |
   col_2  | integer |   |  | | plain   |
View definition:
   SELECT up_test.id,
  up_test.col_1 AS col1,
  up_test.col_2
 FROM up_test;


Right, at this point the names of the underlying column and the view
column are out of sync, so the view definition must incorporate a
renaming AS to be correct.


test=# create or replace view test_view as select id, col_1 , col_2 from
up_test;
ERROR:  cannot change name of view column "col1" to "col_1"


This is attempting to change the view output column's name to col_1
(since you didn't write "AS col1"), and it won't let you.  You could
do "ALTER TABLE test_view RENAME COLUMN col1 TO col_1" to put things
back in sync, if that's what you want.


Aah. You do ALTER TABLE on the view, that was the part I missed.

Yeah an ALTER VIEW ... version of that would be more intuitive.


Or a link back to the ALTER TABLE section in the CREATE OR REPLACE VIEW 
portion of:


https://www.postgresql.org/docs/11/sql-createview.html





    regards, tom lane







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Problems modifying view

2019-11-14 Thread stan
On Thu, Nov 14, 2019 at 06:31:48AM -0800, Adrian Klaver wrote:
> On 11/14/19 5:53 AM, stan wrote:
> > I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
> > getting the following error:
> > 
> > ERROR:  cannot change name of view column "descrip" to "contact_person_1"
> > 
> > I suppose  I can drop the view, and recreate it, but that seems to indicate
> > that the create or replace functionality is not functioning the way I would
> > expect.
> > 
> > Am I missing something here?
> > 
> 
> https://www.postgresql.org/docs/11/sql-createview.html
> 
> "CREATE OR REPLACE VIEW is similar, but if a view of the same name already
> exists, it is replaced. The new query must generate the same columns that
> were generated by the existing view query (that is, the same column names in
> the same order and with the same data types), but it may add additional
> columns to the end of the list. The calculations giving rise to the output
> columns may be completely different."


OK, so I see this is documented behavior. That makes it a "feature", not a
"bug", correct? :-)

Thanks.




Re: Problems modifying view

2019-11-14 Thread stan
On Thu, Nov 14, 2019 at 10:12:22AM -0500, Tom Lane wrote:
> Adrian Klaver  writes:
> > On 11/14/19 5:53 AM, stan wrote:
> >> I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
> >> getting the following error:
> >> ERROR:  cannot change name of view column "descrip" to "contact_person_1"
> >> Am I missing something here?
> 
> > https://www.postgresql.org/docs/11/sql-createview.html
> 
> > "CREATE OR REPLACE VIEW is similar, but if a view of the same name 
> > already exists, it is replaced. The new query must generate the same 
> > columns that were generated by the existing view query (that is, the 
> > same column names in the same order and with the same data types), but 
> > it may add additional columns to the end of the list. The calculations 
> > giving rise to the output columns may be completely different."
> 
> Yeah, the important point being that you can only add columns at the
> *end* of the view, just like you can only add table columns at the
> end.  The same-names-and-types check is intended to catch simple
> mistakes in this area.
> 
> If you actually want to rename an existing view column, use
> ALTER TABLE ... RENAME COLUMN ... for that.  (We probably ought
> to offer an ALTER VIEW spelling of that, but we don't ATM.
> ALTER TABLE works though.)

Thanks.

I am just starting to explore this area. I thought I had just added a
column to a table, and did not realize that it was appended as the last
column. I don't see why I care in that case, though.





Re: Postgresql RDS DB Latency Choosing Hash Join Plan

2019-11-14 Thread github kran
Hello Team,



Hope everyone is doing great!


*Background*

We use PostgreSQL version 10.6 and rely on it heavily for our day-to-day
reads and writes. We have 2 clusters running the PostgreSQL engine: one
cluster keeps data for up to 60 days and the other retains data beyond 1
year. The data is partitioned close to a week (~every 5 days per partition),
giving around 5 partitions per month per table; we have 2 primary tables, so
that is about 10 partitions. In cluster-1 we have around 20 partitions and
in cluster-2 around 160 partitions (data from 2018). We also want to keep up
to 2 years of data in cluster-2 to serve the data needs of the customer; so
far we have reached up to 1 year.



*Current activity*

We have a custom weekly migration DB script job that moves data from one
cluster to the other. It does the following:

1) Uses COPY to export the data from cluster-1, splitting it into binary
files

2) Writes the binary data into the cluster-2 table

3) Creates indexes after the data is copied.



*Problem what we have right now. *

When the migration activity has run (weekly) over the past 2 cycles, we saw
the cluster's read replica instance restart because it had fallen behind the
master (writer instance). Everything after that worked seamlessly, but we
want to avoid the replica restarting. To avoid the restart we started
producing smaller binary files and copying those to cluster-2 instead of
writing 1 big file of 450 million records. The most recent migration
succeeded in that the reader instance didn't restart after we split the 1
big file into multiple files, but it did restart after the indexes were
created on the new table, as index creation can be write-intensive.



*DB parameters set on migration job*

work_mem set to 8 GB and maintenance_work_mem = 32 GB.

Indexes per table = 3

total indexes for 2 tables = 5



*DB size*

Cluster-2 = 8.6 TB

Cluster-1 = 3.6 TB

Peak Table relational rows = 400 - 480 million rows

Average table relational rows = 300 - 350 million rows.

Per table size = 90 -95 GB , per table index size is about 45 GB



*Questions*

1) Can we decrease maintenance_work_mem to 16 GB, and would that slow down
writes to the cluster enough for the reader instance to sync the data
without falling behind?

2) Based on the above use case, what are your recommendations for keeping
the data for up to 2 years?

3) What other recommendations do you have?





Appreciate your replies.

THanks
githubkran



PostGreSQL Replication and question on maintenance

2019-11-14 Thread github kran
sorry changing the subject line.

On Thu, Nov 14, 2019 at 11:21 AM github kran  wrote:

> [quoted text of the original message trimmed]


Re: WAL shipping replica failed to recover database with error: invalid contrecord length 1956 at FED/38FFE208

2019-11-14 Thread Tom Lane
Kyotaro Horiguchi  writes:
> At Wed, 2 Oct 2019 19:24:02 -0400, Stephen Frost  wrote 
> in 
>> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>>> Yeah, those messages are all pretty ancient, from when WAL was new and not
>>> to be trusted much.  Perhaps the thing to do is move the existing info
>>> into DETAIL and make the primary message be something like "reached
>>> apparent end of WAL stream".

>> Yes, +1 on that.

> What do you think about this?

It seems overcomplicated.  Why do you need to reach into
emode_for_corrupt_record's private state?  I think we could just
change the message text without that, and leave the emode
machinations as-is.

I don't understand the restriction to "if (RecPtr == InvalidXLogRecPtr)"
either?  Maybe that's fine but the comment fails to explain it.

Another point is that, as the comment for emode_for_corrupt_record
notes, we don't really want to consider that we've hit end-of-WAL
when reading any source except XLOG_FROM_PG_WAL.  So I think the
change of message ought to include a test of the source, but this
doesn't.

Also, the business with an "operation" string violates the message
translatability guideline about "don't assemble a message out of
phrases".  If we want to have that extra detail, it's better just
to make three separate ereport() calls with separately translatable
messages.

Also, it seems wrong that the page TLI check, just below, is where
it is and isn't part of the main set of page header sanity checks.
That's sort of unrelated to this patch, except not really, because
shouldn't a failure of that test also be treated as an "end of WAL"
condition?

regards, tom lane




Query which shows FK child columns?

2019-11-14 Thread Ron

v9.6.16

I have a query which shows the parents and children in FK relations, along
with the parent column name, but I can't seem to find the child column names.


Is there a way to find the child column names without having to dig into 
pg_constraint?


Thanks

test=# select ccu.table_schema||'.'||ccu.table_name as parent_table,
test-#    ccu.column_name as parent_column,
test-#    tc.table_schema||'.'||tc.table_name as child_table,
test-# ccu.constraint_schema||'.'||ccu.constraint_name as con_name
test-# from information_schema.table_constraints tc,
test-#  information_schema.constraint_column_usage ccu
test-# where tc.constraint_type = 'FOREIGN KEY'
test-#   and tc.constraint_schema = ccu.constraint_schema
test-#   and tc.constraint_name = ccu.constraint_name
test-# order by parent_table, child_table, ccu.column_name
test-# ;
     parent_table    | parent_column |     child_table     |              con_name
---------------------+---------------+---------------------+------------------------------------
 public.inventory    | inv_id        | public.sales_detail | public.sales_detail_sd_inv_id_fkey
 public.sales_header | parent_id     | public.sales_detail | public.sales_detail_id_fkey
(2 rows)

test=# \d inventory
 Table "public.inventory"
   Column    |  Type  | Modifiers
-++---
 inv_id  | integer    | not null
 description | character varying(255) |
Indexes:
    "inventory_pkey" PRIMARY KEY, btree (inv_id)
Referenced by:
    TABLE "sales_detail" CONSTRAINT "sales_detail_sd_inv_id_fkey" FOREIGN 
KEY (sd_inv_id) REFERENCES inventory(inv_id)


test=# \d sales_header
   Table "public.sales_header"
  Column   |  Type   | Modifiers
---+-+---
 parent_id | integer | not null
Indexes:
    "sales_header_pkey" PRIMARY KEY, btree (parent_id)
Referenced by:
    TABLE "sales_detail" CONSTRAINT "sales_detail_id_fkey" FOREIGN KEY 
(child_id) REFERENCES sales_header(parent_id)


test=# \d sales_detail
   Table "public.sales_detail"
  Column   |  Type   | Modifiers
---+-+---
 child_id  | integer | not null
 seq   | integer | not null
 sd_inv_id | integer |
Indexes:
    "sales_detail_pkey" PRIMARY KEY, btree (child_id, seq)
Foreign-key constraints:
    "sales_detail_id_fkey" FOREIGN KEY (child_id) REFERENCES 
sales_header(parent_id)
    "sales_detail_sd_inv_id_fkey" FOREIGN KEY (sd_inv_id) REFERENCES 
inventory(inv_id)
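
For completeness, the child (referencing) column names are also exposed
through information_schema.key_column_usage. A sketch along these lines
(untested; note that constraint_column_usage does not guarantee column
pairing for multi-column foreign keys) may avoid pg_constraint:

```sql
SELECT tc.table_schema || '.' || tc.table_name   AS child_table,
       kcu.column_name                           AS child_column,
       ccu.table_schema || '.' || ccu.table_name AS parent_table,
       ccu.column_name                           AS parent_column,
       tc.constraint_schema || '.' || tc.constraint_name AS con_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON  tc.constraint_schema = kcu.constraint_schema
  AND tc.constraint_name   = kcu.constraint_name
JOIN information_schema.constraint_column_usage ccu
  ON  tc.constraint_schema = ccu.constraint_schema
  AND tc.constraint_name   = ccu.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY'
ORDER BY child_table, kcu.ordinal_position;
```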





--
Angular momentum makes the world go 'round.




Re: Query which shows FK child columns?

2019-11-14 Thread Jeff Ross

On 11/14/19 11:49 AM, Ron wrote:

v9.6.16

I have a query which shows the parents and children in FK relations, 
along with the parent column name, but can't seem to find the child 
column names.


Is there a way to find the child column names without having to dig 
into pg_constraint?


Thanks



I do not think you can do this without using pg_constraint.

I've been using this function to display those FKs.  The original code 
isn't mine but as I recall I had to tweak it a little.


This is on 10, and I can't remember if this was used on 9.6, but I'd be
surprised if any of it doesn't work on 9.6.


client@cargotel_dev> \sf cargotel_common.show_foreign_keys(text)
CREATE OR REPLACE FUNCTION cargotel_common.show_foreign_keys(tablename text)
 RETURNS TABLE(table1 text, column1 text, type text, table2 text, column2 text)
 LANGUAGE plpgsql
AS $function$
    declare
    schemaname text;
    begin
    select into schemaname current_schema();
    return query
    execute format('
    select
    conrelid::regclass::text as table1,
    a.attname::text as column1,
    t.typname::text as type,
    confrelid::regclass::text as table2,
    af.attname::text as column2
    from
    pg_attribute af,
    pg_attribute a,
    pg_type t,
    (
    select
    conrelid,
    confrelid,
    conkey[i] as conkey,
    confkey[i] as confkey
    from (
    select
    conrelid,
    confrelid,
    conkey,
    confkey,
    generate_series(1,array_upper(conkey,1)) as i
    from
    pg_constraint
    where contype = ''f''
    )
    ss) ss2
    where
    af.attnum = confkey and
    af.attrelid = confrelid and
    a.attnum = conkey and
    a.attrelid = conrelid and
    a.atttypid = t.oid and
    confrelid::regclass = ''%I.%I''::regclass
 order by 1,2;',schemaname,tablename);
    end;
$function$

I use column headings "table1, column1, type, table2, column2", but it's
easy enough to tweak the column labels.


Example:


client@cargotel_dev> \d+ ref_acct_cache
                                       Table "client.ref_acct_cache"
 Column │  Type   │ Collation │ Nullable │                  Default                   │ Storage  │ Stats target │ Description
────────┼─────────┼───────────┼──────────┼────────────────────────────────────────────┼──────────┼──────────────┼─────────────
 id     │ integer │           │ not null │ nextval('ref_acct_cache_id_seq'::regclass) │ plain    │              │
 descr  │ text    │           │          │                                            │ extended │              │

Indexes:
    "ref_acct_cache_pkey" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "acct_cache" CONSTRAINT "acct_cache_type_id_ref_acct_cache_id_fk" FOREIGN KEY (type_id) REFERENCES ref_acct_cache(id)



client@cargotel_dev> select * from 
cargotel_common.show_foreign_keys('ref_acct_cache');

   table1   │ column1 │ type │ table2 │ column2
┼─┼──┼┼─
 acct_cache │ type_id │ int4 │ ref_acct_cache │ id
(1 row)


client@cargotel_dev> \d+ acct_cache
                                                  Table "client.acct_cache"
    Column     │           Type           │ Collation │ Nullable │                Default                 │ Storage  │ Stats target │ Description
───────────────┼──────────────────────────┼───────────┼──────────┼────────────────────────────────────────┼──────────┼──────────────┼─────────────
 id            │ integer                  │           │ not null │ nextval('acct_cache_id_seq'::regclass) │ plain    │              │
 type_id       │ integer                  │           │          │                                        │ plain    │              │
 prefix        │ text                     │           │          │                                        │ extended │              │
 data          │ text                     │           │          │                                        │ extended │              │
 amount        │ numeric                  │           │          │                                        │ main     │              │
 timestamp     │ timestamp with time zone │           │          │                                        │ plain    │              │
 check_number  │ text                     │           │          │                                        │ extended │              │
 client_number │ text                     │           │          │                                        │ extended │              │
 check_date    │ date                     │           │          │                                        │ plain    │              │

Indexes:
    "acct_cache_pkey" PRIMARY KEY, btree (id)
    "acct_cache_prefix_type_id_data_idx" btree (prefix, type_id, data)
    "acct_ca

Fwd: PostGreSQL Replication and question on maintenance

2019-11-14 Thread github kran
> Hello postGreSQL Community ,
>
>
>
> Hope everyone is doing great !!.
>
>
> *Background*
>
> We use PostgreSQL Version 10.6 version and heavily use PostgreSQL for our
> day to day activities to write and read data. We have 2 clusters running
> PostgreSQL engine , one cluster
>
> keeps data up to 60 days and another cluster retains data beyond 1 year.
> The data is partitioned close to a week( ~evry 5 days a partition) and we
> have around 5 partitions per month per each table and we have 2 tables
> primarily so that will be 10 tables a week. So in the cluster-1 we have
> around  20 partitions and in cluster-2 we have around 160 partitions ( data
> from 2018). We also want to keep the data for up to 2 years in the
> cluster-2 to serve the data needs of the customer and so far we reached
> upto 1 year of maintaining this data.
>
>
>
> *Current activity*
>
> We have a custom weekly migration DB script that moves data from one
> cluster to the other. It does the following:
>
> 1) Uses COPY to export the data from cluster-1, split into multiple
> binary files
>
> 2) Writes the binary data into the cluster-2 table
>
> 3) Creates indexes after the data is copied
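For illustration, the three steps above might look roughly like this in psql (a sketch only; the table, slice, and file names here are hypothetical, and the export is done in row-range slices because a single binary COPY file cannot simply be split afterwards without breaking the binary header/trailer):

```sql
-- Step 1, on cluster-1: export one row-range slice of a partition in
-- binary format (hypothetical table and file names).
\copy (SELECT * FROM events_2019_w45 WHERE id BETWEEN 1 AND 50000000) TO 'events_w45_part01.bin' WITH (FORMAT binary)

-- Step 2, on cluster-2: load each slice into the matching table.
\copy events_2019_w45 FROM 'events_w45_part01.bin' WITH (FORMAT binary)

-- Step 3, on cluster-2: build indexes only after all slices are loaded.
CREATE INDEX ON events_2019_w45 (id);
```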
>
>
>
> *Problem we have right now*
>
> In the past 2 weekly runs of the migration, the cluster's read-replica
> instance restarted because it had fallen behind the master (writer
> instance). Everything afterwards worked seamlessly, but we want to
> avoid the replica restarting. To that end we started producing smaller
> binary files and copying those to cluster-2 instead of writing 1 big
> file of 450 million records. In the most recent migration this worked:
> the reader instance did not restart after we split the 1 big file into
> multiple files, but it did restart after the indexes were created on
> the new table, as that step can be write-intensive.
>
>
>
> *DB parameters set on migration job*
>
> work_mem set to 8 GB and maintenance_work_mem set to 32 GB.
>
> Indexes per table = 3
>
> total indexes for 2 tables = 5
>
>
>
> *DB size*
>
> Cluster-2 = 8.6 TB
>
> Cluster-1 = 3.6 TB
>
> Peak Table relational rows = 400 - 480 million rows
>
> Average table relational rows = 300 - 350 million rows.
>
> Per table size = 90 -95 GB , per table index size is about 45 GB
>
>
>
> *Questions*
>
> 1) Can we decrease maintenance_work_mem to 16 GB? Would that slow down
> writes to the cluster enough for the reader instance to keep up?
>
> 2) Based on the above use case, what are your recommendations for
> keeping the data for up to 2 years?
>
> 3) What else would you recommend?
>
>
>
>
>
> Appreciate your replies.
>
> Thanks
> githubkran
>
>>


Re: PostGreSQL Replication and question on maintenance

2019-11-14 Thread Pavel Stehule
On Fri, Nov 15, 2019 at 6:26 AM github kran  wrote:

>
> Hello postGreSQL Community ,
>>
>>
>>
>> Hope everyone is doing great !!.
>>
>>
>> *Background*
>>
>> We use PostgreSQL Version 10.6 version and heavily use PostgreSQL for our
>> day to day activities to write and read data. We have 2 clusters running
>> PostgreSQL engine , one cluster
>>
>> keeps data up to 60 days and another cluster retains data beyond 1 year.
>> The data is partitioned close to a week( ~evry 5 days a partition) and we
>> have around 5 partitions per month per each table and we have 2 tables
>> primarily so that will be 10 tables a week. So in the cluster-1 we have
>> around  20 partitions and in cluster-2 we have around 160 partitions ( data
>> from 2018). We also want to keep the data for up to 2 years in the
>> cluster-2 to serve the data needs of the customer and so far we reached
>> upto 1 year of maintaining this data.
>>
>>
>>
>> *Current activity*
>>
>> We have a custom weekly migration DB script job that moves data from 1
>> cluster to another cluster what it does is the below things.
>>
>> 1) COPY command to copy the data from cluster-1 and split that data into
>> binary files
>>
>> 2) Writing the binary data into the cluster-2 table
>>
>> 3) Creating indexes after the data is copied.
>>
>>
>>
>> *Problem what we have right now. *
>>
>> When the migration activity runs(weekly) from past 2 times , we saw the
>> cluster read replica instance has restarted as it fallen behind the
>> master(writer instance). Everything
>>
>> after that worked seamlessly but we want to avoid the replica getting
>> restarted. To avoid from restart we started doing smaller binary files and
>> copy those files to the cluster-2
>>
>> instead of writing 1 big file of 450 million records. We were successful
>> in the recent migration as the reader instance didn’t restart after we
>> split 1 big file into multiple files to copy the data over but did restart
>> after the indexes are created on the new table as it could be write
>> intensive.
>>
>>
>>
>> *DB parameters set on migration job*
>>
>> work_mem set to 8 GB  and maintenace_work_mem=32 GB.
>>
>
These numbers look crazy high - how much memory does your server have?
More than 1 TB?
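One way to keep settings like these from affecting the rest of the cluster is to scope them to the migration session only; a minimal sketch, assuming the index build runs in a single transaction (table, column, and index names are hypothetical):

```sql
-- SET LOCAL applies only until COMMIT, so the cluster-wide defaults
-- are untouched regardless of what the migration job uses.
BEGIN;
SET LOCAL maintenance_work_mem = '16GB';
CREATE INDEX events_2019_w45_ts_idx ON events_2019_w45 (ts);
COMMIT;
```

(Note that CREATE INDEX CONCURRENTLY cannot be used inside a transaction block, so this pattern applies to plain CREATE INDEX.)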




Weird ranking results with ts_rank

2019-11-14 Thread Javier Ayres
Hi everybody.

I'm implementing a solution that uses PostgreSQL's full text search
capabilities and I have come across a particular set of results for ts_rank
that don't seem to make sense according to the documentation. I have tried
the following queries in PostgreSQL 10, 11 and 12.
In both cases only the word "box" matches, yet adding a non-matching
word to the query with OR increases the ranking. If I keep adding more
non-matching words with OR, the ranking starts to decrease again. I
would have expected the second query to have the highest score, with the
score decreasing from there as more non-matching words are added.
Is there something I'm not understanding?

Thanks.

postgres=# select ts_rank(to_tsvector('search for a text box'),
to_tsquery('circle | lot <-> box'));
   ts_rank
-
 0.020264236
(1 row)

postgres=# select ts_rank(to_tsvector('search for a text box'),
to_tsquery('lot <-> box'));
 ts_rank
-
   1e-20
(1 row)
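One further probe along the same lines (a sketch of the experiment, not an explanation, and the resulting number is not verified here) is to add a second non-matching OR branch and watch whether the rank rises or falls:

```sql
SELECT ts_rank(to_tsvector('search for a text box'),
               to_tsquery('circle | square | lot <-> box'));
```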

-- 
Javier Ayres
Data Engineer
+1 855 636 5811 <+18556365811> - sophilabs.co