Re: Performance of INSERT into temporary tables using psqlODBC driver

2018-09-16 Thread padusuma
Hello Tim,

>Re-implementation of a solution is often a hard case to sell, but it 
>might be the only way to get the performance you want. The big positive 
>to a re-implementation is that you usually get a better solution because 
>you are implementing with more knowledge and experience about the 
>problem domain. Design is often cleaner and as a result, easier to 
>maintain. It usually takes a lot less time than the original 
>implementation as well and can be the more economical solution compared 
>to fighting a system which has fundamental design limitations that 
>restrict performance.

Thank you for the suggestions and advice. I will definitely look into
re-implementation of certain parts of our solution as an option to improve
performance.







Advice on machine specs for growth

2018-09-16 Thread Rory Campbell-Lange
It is some time since I've written to the postgres lists. My apologies
if this is the wrong list to post this to.

We are looking to upgrade our current database server infrastructure so
that it is suitable for the next 3 years or so.

Presently we have two physical servers with the same specs:

- 220GB database partition on RAID10 SSD on HW RAID
- 128GB RAM
- 8 * Xeon E5-2609

(The HW RAID card is a MegaRAID SAS 9361-8i with BBU)

The second server is a hot standby to the first, and we presently have
about 350 databases in the cluster. 

We envisage needing about 800GB of primary database storage in the next
three years, with 1000 databases in the cluster.

We are imagining either splitting the cluster into two (to have four
main servers) or increasing the disk capacity and RAM in each server.
The second seems preferable from a day-to-day management perspective,
but it wouldn't be too difficult to deploy our software upgrades across
two machines rather than one.

Resources on the main machines seem to be perfectly adequate at
present, but it is difficult to know at what stage queries might start
spilling to disk. We occasionally hit 45% CPU utilisation, with load
average peaking at 4.0, and we occasionally go into swap in a minor way
(although we can't determine the reason for going into swap). There is
close to no iowait in normal operation.
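
One way we could watch for spills (a minimal sketch; the counters in
pg_stat_database and the log_temp_files setting are stock PostgreSQL):

    -- Cumulative temp-file usage per database since the last stats reset
    SELECT datname, temp_files, temp_bytes
      FROM pg_stat_database
     ORDER BY temp_bytes DESC;

    -- Log every query that spills to disk, regardless of file size
    ALTER SYSTEM SET log_temp_files = 0;
    SELECT pg_reload_conf();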

It also seems a bit incongruous to be writing about physical machines
these days, but I can't find a UK data-protection-compatible cloud
provider whose pricing beats physical hardware amortised over three
years (including rack costs). The ability to more easily "make"
machines to help with upgrades is attractive, though.

Some comments and advice on how to approach this would be very
gratefully received.

Thanks
Rory






Re: How Do You Associate a Query With its Invoking Procedure?

2018-09-16 Thread Fred Habash
All great ideas.

I was thinking of something similar to some other RDBMS engines, where
SQL is automatically tied to the invoking PROGRAM_ID with zero setup on
the client side. I thought there could be something similar in PG
somewhere in the catalog.
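
The closest built-in I can see is the pg_stat_statements.track setting;
a minimal sketch, assuming the extension is already loaded (the
total_time column is named total_exec_time on PostgreSQL 13 and later):

    -- Track statements nested inside functions, not just top-level calls
    ALTER SYSTEM SET pg_stat_statements.track = 'all';
    SELECT pg_reload_conf();

    -- Statements invoked from inside functions now appear individually
    SELECT query, calls, total_time
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 20;

This still doesn't label each statement with its invoking function,
though, so it narrows things down rather than giving a true PROGRAM_ID.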

As always, great support. This level of support helps a lot in our migration to 
Postgres. 

-
Thank you. 

> On Sep 15, 2018, at 5:24 AM, Patrick Molgaard  wrote:
> 
> You might find application-level tracing a more practical answer - e.g. check 
> out Datadog APM for a (commercial) plug and play approach or Jaeger for a 
> self-hostable option.
> 
> Patrick
>> On Fri, Sep 14, 2018 at 4:38 PM Fred Habash  wrote:
>> Any ideas, please? 
>> 
>>> On Thu, Sep 13, 2018, 3:49 PM Fd Habash  wrote:
>>> An API function may invoke 10 queries. Ideally, I would like to know what 
>>> queries are invoked by it and how long each took.
>>> 
>>> I’m using pg_stat_statements. I can see the API function statement, but how 
>>> do I deterministically identify all queries invoked by it?
>>> 
>>> Thank you
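
One way to see exactly this (each statement a function invokes, with
its timing) appears to be auto_explain with nested-statement logging; a
minimal sketch, assuming superuser access in a test session
(my_api_function is a hypothetical stand-in):

    -- Load auto_explain for this session only (superuser required)
    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log every statement
    SET auto_explain.log_nested_statements = on;  -- include statements run inside functions
    SET auto_explain.log_analyze = on;            -- include actual run times

    -- Hypothetical call: each query the function runs is now written
    -- to the server log with its plan and duration
    SELECT my_api_function(42);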