Connection hangs on newly created schema

2018-03-27 Thread Juan Manuel Cuello
I have a postgresql database with around 4000 schemas. Each schema has
around the same 65 tables. There are 8 processes connected to the database
(it is a web app). Each process has only one direct connection to the
database (no connection pool).

Everything works ok until a new schema (with all its tables, indices, etc)
is added to the database. When this happens, the next query made to the
database takes too long, sometimes more than a minute or two. I also see
high DB CPU usage during that lapse. Then everything continues working ok
as usual and CPU drops to normal levels.

I'm fairly sure this has to do with the number of schemas, maybe related to
the relcache, but I'm not sure why creating new schemas causes all this or
where to look to solve the problem.

I can also see that the server has a lot of free memory and is not swapping
at all.
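Since the suspicion is catalog/relcache size, a quick way to gauge the scale (a diagnostic sketch using plain catalog queries; nothing schema-specific assumed) is:

```sql
-- With ~4000 schemas x ~65 tables, pg_class can hold hundreds of thousands
-- of rows, and every backend builds relcache/catcache entries from it.
SELECT count(*) AS relations FROM pg_class;

-- Size of the two catalogs most affected by having many tables:
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_class'))     AS pg_class_size,
       pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute')) AS pg_attribute_size;
```

If those counts are in the hundreds of thousands, per-backend catalog cache work after DDL is a plausible suspect.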

OS: Linux
Server version: 9.3.20

Any insight will be highly appreciated.


Re: Connection hangs on newly created schema

2018-03-28 Thread Juan Manuel Cuello
On Wed, Mar 28, 2018 at 2:58 AM, Laurenz Albe wrote:

> Juan Manuel Cuello wrote:
> > I have a postgresql database with around 4000 schemas. Each schema has
> > around the same 65 tables. There are 8 processes connected to the
> > database (it is a web app). Each process has only one direct connection
> > to the database (no connection pool).
> >
> > Everything works ok until a new schema (with all its tables, indices,
> > etc) is added to the database. When this happens, the next query made to
> > the database takes too long, sometimes more than a minute or two. Also I
> > see a high DB CPU usage during that lapse. Then everything continues
> > working ok as usual and CPU drops to normal levels.
> >
> > I'm mostly sure this has to do with the amount of schemas, maybe related
> > to relcache, but not sure why creating new schemas causes all this and
> > where to look to solve this problem.
>
> If you know the slow query, EXPLAIN (ANALYZE, BUFFERS) it and see where
> the time is spent.
>


It doesn't seem to depend on the query. Immediately after the new schema is
created, the next query takes a long time. It eventually completes and the
next statements execute ok.

I think it is related to something the db processes are doing when the new
schema is created (maybe reloading the relcache?), because the db processes
consume a lot of CPU during that lapse. After a while, everything goes back
to normal.
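One way to test the relcache hypothesis is to look at the stalled backends while the stall is happening. This is a sketch for 9.3, where pg_stat_activity still has the boolean `waiting` column (replaced by `wait_event` in 9.6):

```sql
-- Run while a query is stalled after schema creation. High CPU with
-- waiting = false suggests catalog/cache work rather than lock contention.
SELECT pid, waiting, state,
       now() - query_start AS runtime,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;
```

If the backends show `waiting = true` instead, joining against pg_locks would be the next step.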


High WriteLatency RDS Postgres 9.3.20

2018-06-18 Thread Juan Manuel Cuello
Hi,

I'm experiencing high WriteLatency levels on a Postgres 9.3.20 server
hosted on Amazon RDS. So far it's been almost two months of investigation
and the people at AWS technical support don't seem to find the cause. I
think it could be related to Postgres and the number of schemas/tables in
the database, which is why I'm posting the issue here.

I have around 4600 schemas, each containing 62 tables. The DB size is only
around 130 GB. The server has plenty of available RAM, CPU usage is less
than 10% and there are only around 16 connections, but WriteLatency is
unusually high.

As I don't have access to the server, I cannot see which processes are
writing to disk, but my guess is that each Postgres process is writing to
disk for some reason.

This issue doesn't seem related to workload. If I restart the server,
WriteLatency drops to normal levels and stays there until, after some time
(a few hours or a day) and without any obvious trigger, it spikes again and
remains at high levels from then on.

Is it possible that, for some reason, Postgres processes start writing to
disk at some point after reaching some internal limit? Maybe related to the
relcache/catcache/syscache? Any other thoughts?

Thanks

Juan


Re: High WriteLatency RDS Postgres 9.3.20

2018-06-19 Thread Juan Manuel Cuello
On Mon, Jun 18, 2018 at 7:23 PM Andres Freund  wrote:

> > So far it's been almost two months of investigation
> > and people at AWS technical support don't seem to find the cause. I think
> > it could be related to Postgres and the number of schema/tables in the
> > database, that's why I post this issue here.
>
> There've been improvements made since 9.3. Upgrade.
>

You are right, and I'm aware of that (I'm planning a version upgrade). I
just wanted to know whether the number of schemas/tables could be the cause
of high write levels due to Postgres processes reaching some internal
limit.

Thanks.

Juan


Re: High WriteLatency RDS Postgres 9.3.20

2018-06-19 Thread Juan Manuel Cuello
On Tue, Jun 19, 2018 at 12:16 AM Benjamin Scherrey <
scher...@proteus-tech.com> wrote:

> I would also add that AWS' I/O capabilities are quite poor and expensive.
> I assume that you have tried purchasing additional IOOPs on that setup to
> see whether you got an expected speed up? If not you should try that as a
> diagnostic tool even if you wouldn't want to pay that on an ongoing basis.
>

I haven't tried increasing the available IOPS, but looking at the metrics,
I'm far from the limit, so it doesn't seem to be related. I will explore
this option further, though.

> We have a client that is I/O write bound and it has taken us significant
> efforts to get it to perform well on AWS. We definitely run our own
> instances rather than depend on RDS and have always been able to outperform
> RDS instances which seem to really be focused to provide a PAAS capability
> for developers who really don't want to have to understand how a db works.
> Running our identical environment on bare metal is like night & day under
> any circumstances when compared to AWS.
>
> Client's requirement is AWS so we keep working on it and we like AWS for
> many things but understand it will always underperform on I/O.
>
> Post actual measurements with and without IOOPs or create your own PG
> server instance and then people might be able to give you additional
> insights.
>

I'll consider your suggestions and will come back with more info in case I
create my own environment. I just wanted to know whether the number of
schemas/tables could be the cause of the high write levels, in order to
rule out this hypothesis.

Thanks


Re: Slow shutdowns sometimes on RDS Postgres

2018-09-14 Thread Juan Manuel Cuello
On Thu, Sep 13, 2018 at 7:05 PM Chris Williams  wrote:

> We just have a few ruby on rails applications connected to the database,
> and don't really have any long running or heavy queries and the db is under
> very light load, so I don't understand why it takes so long to shutdown.
> We do have a sizeable number of connections though (about 600) and there
> are two replicas connected to it.  I also tried
> setting idle_in_transaction_session_timeout to 300 seconds to see if that
> would help, but it made no difference.
>

If you only have a few Rails apps connected to the DB and a very light
load, maybe you can try reducing the number of connections (how big are
your connection pools?) and see if that helps. 600 seems like a high number
for just a few apps and a light load.
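Before shrinking the pools, it may help to see where those ~600 connections actually come from. A generic sketch using pg_stat_activity:

```sql
-- Break down connections by role, application and state; a large count of
-- 'idle' rows usually means the client-side pools are oversized.
SELECT usename, application_name, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY count(*) DESC;
```

Many idle connections also slow shutdown, since Postgres has to signal and wait for every backend to exit.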