Hi Vijay, and thanks for replying,

>> I have a 400GB joining table (one SMALLINT and the other INTEGER -
>> Primary Keys on other tables) with 1000 fields on one side and 10M on
>> the other, so 10,000M (or 10Bn) records all told.


> My queries:

>> <query>  results with the existing setup? Does it look problematic?


> How would your table grow on either side of the join?

In this case, uniformly! 1 -> 10.

> Append only, static data or too frequently updated etc, or dropped 
> periodically,

Append only in this case - not updated nor dropped.

> so that delete based bloating can be skipped completely.

It can be skipped!

> How distributed is the data based on smallint keys, equally or unequally.

Totally uniform - see my own answer to my question above. If it hadn't
been uniform, I might have considered RANGE-based partitioning instead.
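Since the smallint key is uniformly distributed, HASH partitioning is the natural fit. A minimal sketch of the DDL - table and column names here are hypothetical, not the actual schema:

```sql
-- Hypothetical wide join table, hash-partitioned on its SMALLINT key.
CREATE TABLE join_tbl (
    small_id  SMALLINT NOT NULL,   -- FK to the 1000-row side
    big_id    INTEGER  NOT NULL,   -- FK to the 10M-row side
    payload   TEXT,
    PRIMARY KEY (small_id, big_id)
) PARTITION BY HASH (small_id);

-- e.g. 16 partitions; the "right" modulus is workload-dependent.
CREATE TABLE join_tbl_p0 PARTITION OF join_tbl
    FOR VALUES WITH (MODULUS 16, REMAINDER 0);
-- ...repeat for remainders 1 through 15.
```

With uniform keys, LIST or RANGE on small_id would also give even partitions; HASH just avoids maintaining explicit boundaries as data grows.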

> What kind of queries would be run and results returned ? Oltp or olap like ? 
> Quick queries with few rows returned, or heavy queries with lots of rows 
> returned.

Pretty much OLAP like - summary queries. Point queries return in
sub-millisecond range when based on PK!
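For those OLAP-style summaries, the main partitioning win is pruning: if the partition key appears in the WHERE clause, the planner only touches the relevant partition(s). A sketch, assuming the hypothetical hash-partitioned join_tbl(small_id, ...) above:

```sql
-- With equality on the partition key, the plan should scan only
-- the one partition holding small_id = 42 (hash pruning works on
-- equality predicates in modern PostgreSQL).
EXPLAIN (COSTS OFF)
SELECT small_id, count(*)
FROM   join_tbl
WHERE  small_id = 42
GROUP  BY small_id;
```

Summary queries that scan all keys won't prune, of course, but they can still benefit from per-partition parallelism and smaller indexes.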

> Partitioning has been ever improving, so the best option would be to use 
> the latest pg version, if possible.
> Also is there any scope of normalisation of that table, I mean I know 
> theoretically it is possible, but I have not seen any design with that wide 
> table( of 1000 cols), so would be good to know.

> Just asking, maybe partitioning would be the best option, but wanting to know/see 
> the benefit pre and post partitioning.


Thanks again for your questions - they gave me pause for thought and I
will try to apply them in future partitioning scenarios. (Unfortunately
:-) ) there is no magic number of partitions for, say, a given size of
table - otherwise it would be the default and would be done
automatically!
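On Vijay's point about measuring the benefit pre- and post-partitioning: one low-cost approach is to compare the same query against a flat and a partitioned clone built from a sample. Names are hypothetical, and the sample fraction is just illustrative:

```sql
-- Hypothetical scaled-down experiment: ~1% sample into a flat copy,
-- and the same sample into a hash-partitioned copy, then compare.
CREATE TABLE join_flat AS
    SELECT * FROM join_tbl TABLESAMPLE SYSTEM (1);

-- (join_part created with the same sample, PARTITION BY HASH (small_id))

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM join_flat WHERE small_id = 42;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM join_part WHERE small_id = 42;
```

The buffer counts and timings give a rough before/after picture without committing 400GB to the experiment, though a sample won't capture everything (e.g. per-partition index heights at full scale).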

Rgs,

Pól...


> Vijay

