PostgreSQL suitable?

2017-12-19 Thread Kellner Thiemo
Hi

We are developing a data warehouse whose integration layer will start with
over 100 TB of data. There are not many entities, though we can probably
partition them, and above all we should use inheritance for the lab results.
I was just wondering whether PostgreSQL is able to cope with that. In case it
depends on the kind of modelling: we have not yet decided between a classic
ERD, anchor modelling and data vault.
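
For illustration, a minimal sketch of what inheritance for the lab results
could look like in PostgreSQL (all table and column names here are made up):

  -- Hypothetical parent table with the columns common to all lab results.
  CREATE TABLE lab_result (
      result_id   bigint      NOT NULL,
      sample_id   bigint      NOT NULL,
      measured_at timestamptz NOT NULL
  );

  -- A child table inherits the common columns and adds type-specific ones.
  CREATE TABLE lab_result_numeric (
      value numeric NOT NULL,
      unit  text    NOT NULL
  ) INHERITS (lab_result);

  -- Queries against the parent automatically include rows from children.
  SELECT count(*) FROM lab_result;

(Note that since PostgreSQL 10, declarative partitioning with PARTITION BY is
generally preferred over inheritance for splitting a large table by range or
list.)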

Does anyone have experience with such a setup?

Kind regards

Thiemo



RE: PostgreSQL suitable?

2017-12-19 Thread Kellner Thiemo
Hi

Thank you all for your thoughts and for shedding light on this subject. I am
very pleased to have received such engaged feedback, and also that PostgreSQL
seems to be so capable. I am quite sad that we most probably will not set up
our DWH with it as the RDBMS. That is, I fear, largely an in-house political
decision.

Kind regards

Thiemo



RE: Mailing list archiver

2018-01-02 Thread Kellner Thiemo
Looks nice, thanks. However, I could not find some posts to this list from
19th and 20th December.

Kind regards

Thiemo


RE: pg_basebackup is taking more time than expected

2018-01-15 Thread Kellner Thiemo
> According to our previous discussion, pg_basebackup does not depend on any
> of the postgresql configuration parameters. If I go for the gzip format, we
> need to compromise on time.

You do not necessarily compromise on time when compressing; compared to an
uncompressed backup, a speed gain is actually possible. The better the data
compresses, the less has to be written to (slow) disk. However, compression
comes with a CPU load penalty. I suggest you experiment with different
setups. Maybe a lower --compress level still achieves a decent compression
ratio while consuming considerably less CPU time.
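
For example, one could time two runs at different compression levels and
compare wall-clock time against the resulting backup size (the target
directories here are made up; -Z sets the gzip level for the tar format):

  # Fast, low compression versus slow, high compression.
  time pg_basebackup -D /backup/base_z1 -Ft -Z 1
  time pg_basebackup -D /backup/base_z9 -Ft -Z 9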