Re: delete on table with many partitions uses a lot of ram

2019-03-10 Thread reg_pg_stefanz
I must have missed this; I did not immediately realize there was a 
difference between select and delete.

Thanks for the explanation and outlook.
Stefan





migration of 100+ tables

2019-03-10 Thread Julie Nishimura
Hello friends, I will need to migrate 500+ tables from one server (8.3) to 
another (9.3). I cannot dump and load the entire database due to storage 
limitations (the source is > 20 TB, and the target is about 1.5 TB).

I was thinking about using pg_dump with a customized -t flag, then using 
pg_restore. The table names will be in a list, or I could dump their names 
into a table. What would be your suggestions on how to do this more 
efficiently?
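
For what it's worth, a minimal sketch of that approach (file, host, and 
database names are placeholders, and the table names are assumed to be 
simple unquoted identifiers): tables.txt holds one table name per line, and 
each name becomes a -t switch, which pg_dump accepts repeatedly. Running 
the newer (9.3) pg_dump against the old server is the usual recommendation:

  # build one -t switch per table name listed in tables.txt
  TABLE_FLAGS=$(sed 's/^/-t /' tables.txt)
  # dump only those tables in custom format, then restore on the new server
  pg_dump -h old_host -Fc $TABLE_FLAGS -f subset.dump mydb
  pg_restore -h new_host -d mydb subset.dump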

Thank you for your ideas, this is great to have you around, guys!




Re: migration of 100+ tables

2019-03-10 Thread Adrian Klaver



The sizes you mention above, are they for the uncompressed raw data?

Are the tables all in one schema or multiple?

Where I am going with this is pg_dump -Fc --schema.

See:
https://www.postgresql.org/docs/10/app-pgdump.html

Then pg_restore -l to get a TOC (table of contents).

Comment out the items you do not want in the TOC.

Then pg_restore --use-list.

See:

https://www.postgresql.org/docs/10/app-pgrestore.html
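
In short, a sketch of the whole sequence (schema, database, and file names 
are invented for illustration):

  pg_dump -Fc -n myschema -f myschema.dump mydb
  pg_restore -l myschema.dump > toc.list
  # edit toc.list: put a ';' at the start of lines you do not want
  pg_restore --use-list=toc.list -d mydb myschema.dump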








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: migration of 100+ tables

2019-03-10 Thread Julie Nishimura
Oh, this is great news! Yay, thanks!




Re: migration of 100+ tables

2019-03-10 Thread Adrian Klaver

On 3/10/19 9:07 PM, Julie Nishimura wrote:

Oh, this is great news! Yay, thanks!


Just be aware that -n is the short form of --schema:

https://www.postgresql.org/docs/10/app-pgdump.html

"
Note

When -n is specified, pg_dump makes no attempt to dump any other 
database objects that the selected schema(s) might depend upon. 
Therefore, there is no guarantee that the results of a specific-schema 
dump can be successfully restored by themselves into a clean database.

"

So any relationships across schemas will not be preserved.
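
A hypothetical illustration of that caveat (schema and table names are 
invented):

  CREATE SCHEMA ref;
  CREATE TABLE ref.countries (code text PRIMARY KEY);
  CREATE SCHEMA app;
  CREATE TABLE app.users (
      id int PRIMARY KEY,
      country text REFERENCES ref.countries (code)
  );

  -- pg_dump -Fc -n app mydb   dumps app.users but not ref.countries,
  -- so restoring the foreign key into a clean database will fail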







--
Adrian Klaver
adrian.kla...@aklaver.com



TPC-DS queries

2019-03-10 Thread Tatsuo Ishii
I played with TPC-DS and found that some of its queries can't be executed
because of SQL errors, and I am not sure why.

For example, with query 36:

select  
sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin
   ,i_category
   ,i_class
   ,grouping(i_category)+grouping(i_class) as lochierarchy
   ,rank() over (
partition by grouping(i_category)+grouping(i_class),
case when grouping(i_class) = 0 then i_category end 
order by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as 
rank_within_parent
 from
store_sales
   ,date_dim   d1
   ,item
   ,store
 where
d1.d_year = 2000 
 and d1.d_date_sk = ss_sold_date_sk
 and i_item_sk  = ss_item_sk 
 and s_store_sk  = ss_store_sk
 and s_state in ('TN','TN','TN','TN',
 'TN','TN','TN','TN')
 group by rollup(i_category,i_class)
 order by
   lochierarchy desc
  ,case when lochierarchy = 0 then i_category end -- line 25 is here.
  ,rank_within_parent
  limit 100;
psql:query_0.sql:1935: ERROR:  column "lochierarchy" does not exist
LINE 25:   ,case when lochierarchy = 0 then i_category end

I have followed the instructions here:
https://ankane.org/tpc-ds

PostgreSQL is master branch HEAD. To me, the SQL above looks valid.
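
One hedged guess at a workaround, in case the alias reference is the 
problem: the PostgreSQL documentation says an output-column name in ORDER 
BY has to stand alone and cannot be used inside an expression, such as the 
CASE on line 25. Repeating the underlying grouping expression instead of 
the alias sidesteps that:

  order by
    lochierarchy desc
   ,case when grouping(i_category)+grouping(i_class) = 0
         then i_category end
   ,rank_within_parent
   limit 100;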

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



RE: Ran out of memory retrieving query results.

2019-03-10 Thread Nanda Kumar
Hello Team,

Can you please help with the issue below? The following error occurred when 
I ran a SELECT statement over a huge data volume.

Error details:

Ran out of memory retrieving query results.
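
For reference, a minimal sketch (an assumption about the cause, not taken 
from this thread) of reading the result incrementally through a server-side 
cursor, so the client does not have to buffer every row at once; huge_table 
stands in for the real query:

  BEGIN;
  DECLARE big_cur CURSOR FOR SELECT * FROM huge_table;
  FETCH FORWARD 10000 FROM big_cur;  -- repeat until it returns no rows
  CLOSE big_cur;
  COMMIT;

(In psql, \set FETCH_COUNT 1000 has psql fetch in batches automatically.)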

Regards
Nanda Kumar.M
SmartStream Technologies
Eastland Citadel | 5th Floor | 102 Hosur Road | Banglore 560 095 | India
nanda.ku...@smartstream-stp.com | www.smartstream-stp.com
Mob: +91 99720 44779
Tel: +91 80617 64107



-Original Message-
From: Stephen Frost [mailto:sfr...@snowman.net]
Sent: 09 March 2019 00:19
To: Nanda Kumar
Cc: pgsql-general Owner
Subject: Re: Ran out of memory retrieving query results.

Greetings,

You need to email pgsql-general@lists.postgresql.org with your question; 
this address is for the moderators.

Thanks!

Stephen

* Nanda Kumar (nanda.ku...@smartstream-stp.com) wrote:
> Hello Team,
>
>
>
> I am getting an error when I try to run the select query. Kindly help me 
> in fixing this issue.
>
>
>
> Error Details :
>
>
>
> Ran out of memory retrieving query results.
>
>
>
> Screenshot of the error: [inline image omitted]
>
>
>
> Regards
>
> Nanda Kumar.M
>
> SmartStream Technologies
>
> Eastland Citadel | 5th Floor | 102 Hosur Road | Banglore 560 095 |
> India
>
> nanda.ku...@smartstream-stp.com | www.smartstream-stp.com
>
> Mob +91 99720 44779
>


