Re: waiting for client write
On Wed, Jun 9, 2021 at 4:47 PM Ayub Khan wrote:
>
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow. I read
> that this might be due to IOPS on RDS. However we have 80k IOPS on this
> test RDS.

ClientWrite means Postgres is waiting on the *network* sending the reply
back to the client; it is unrelated to I/O. So either your client isn't
consuming the response fast enough, or the network between them is too
slow or shaped.

--
Magnus Hagander
Me: https://www.hagander.net/
Work: https://www.redpill-linpro.com/
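For anyone who wants to verify this on their own system: the wait event is
visible in pg_stat_activity (a standard view, nothing RDS-specific). A
minimal check while the load test is running:

    -- Sessions whose result is computed but which are blocked sending it
    -- back to the client over the network.
    SELECT pid, usename, state, wait_event_type, wait_event,
           left(query, 60) AS query
    FROM pg_stat_activity
    WHERE wait_event = 'ClientWrite';

If sessions pile up here while the queries themselves are fast, the
bottleneck is the consumer or the network, not the database.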
Re: waiting for client write
@Magnus
There is an EC2 Tomcat server which communicates with PostgreSQL. This is
a replica of our production setup, except that here the test database is
PostgreSQL on RDS, while production runs Oracle on an EC2 instance.
On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The api performance is slow. I read that
> this might be due to IOPS on RDS. However we have 80k IOPS on this test
> RDS.
>
> Below is the query which is being load tested
>
> SELECT
>     a.menu_item_id,
>     a.menu_item_name,
>     a.menu_item_category_id,
>     b.menu_item_category_desc,
>     c.menu_item_variant_id,
>     c.menu_item_variant_type_id,
>     c.price,
>     c.size_id,
>     c.parent_menu_item_variant_id,
>     d.menu_item_variant_type_desc,
>     e.size_desc,
>     f.currency_code,
>     a.image,
>     a.mark_id,
>     m.mark_name
> FROM .menu_item_category AS b,
>      .menu_item_variant AS c,
>      .menu_item_variant_type AS d,
>      .item_size AS e,
>      .restaurant AS f,
>      .menu_item AS a
>      LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id)
> WHERE a.menu_item_category_id = b.menu_item_category_id
>   AND a.menu_item_id = c.menu_item_id
>   AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
>   AND d.is_hidden = 'false'
>   AND c.size_id = e.size_id
>   AND a.restaurant_id = f.restaurant_id
>   AND f.restaurant_id = 1528
>   AND (a.menu_item_category_id = NULL OR NULL IS NULL)
>   AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
>                                 FROM .menu_item_variant
>                                 WHERE menu_item_id = a.menu_item_id
>                                   AND deleted = 'N'
>                                 LIMIT 1)
>   AND a.active = 'Y'
>   AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE
>        CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
>   AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
> ORDER BY a.row_order, menu_item_id;
>
> --Ayub
>
--
Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile:+966-502674604
--
It is proved that Hard Work and knowledge will get you close but attitude
will get you there. However, it's the Love
of God that will put you over the top!!
Re: slow query
On Tue, Jun 8, 2021 at 12:32 PM Ayub Khan wrote:

> In AWS RDS performance insights the client writes is high and the api
> which receives data on the mobile side is slow during load test.

That indicates a client or network problem.

Jeff
Re: slow query
Below is the test setup:

JMeter --> (load-balanced Tomcat on EC2 instances) --> RDS read replicas

All of these run on separate EC2 instances in the AWS cloud, in the same
region.
On Tue, 8 Jun 2021, 19:03 Ayub Khan, wrote:
>
> I checked that all the indexes are defined on the tables; however, the
> query seems slow. Below is the plan. Can anyone give any pointers to
> verify?
>
> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
>        b.menu_item_category_desc, c.menu_item_variant_id,
>        c.menu_item_variant_type_id, c.price, c.size_id,
>        c.parent_menu_item_variant_id, d.menu_item_variant_type_desc,
>        e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name
> FROM menu_item_category AS b, menu_item_variant AS c,
>      menu_item_variant_type AS d, item_size AS e, restaurant AS f,
>      menu_item AS a
>      LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id)
> WHERE a.menu_item_category_id = b.menu_item_category_id
>   AND a.menu_item_id = c.menu_item_id
>   AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
>   AND d.is_hidden = 'false'
>   AND c.size_id = e.size_id
>   AND a.restaurant_id = f.restaurant_id
>   AND f.restaurant_id = 1528
>   AND (a.menu_item_category_id = NULL OR NULL IS NULL)
>   AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
>                                 FROM menu_item_variant
>                                 WHERE menu_item_id = a.menu_item_id
>                                   AND deleted = 'N'
>                                 LIMIT 1)
>   AND a.active = 'Y'
>   AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE
>        CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
>   AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
> ORDER BY a.row_order, menu_item_id;
>
>
> below is the plan
>
>
> Sort (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)
>   Sort Key: a.row_order, a.menu_item_id
>   Sort Method: quicksort  Memory: 48kB
>   ->  Nested Loop Left Join (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)
>         Join Filter: (a.mark_id = m.mark_id)
>         Rows Removed by Join Filter: 267
>         ->  Nested Loop (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)
>               ->  Nested Loop (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)
>                     ->  Nested Loop (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)
>                           ->  Nested Loop (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)
>                                 ->  Nested Loop (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)
>                                       ->  Index Scan using menu_item_restaurant_id on menu_item a (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)
>                                             Index Cond: (restaurant_id = 1528)
>                                             Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))
>                                             Rows Removed by Filter: 194
>                                       ->  Index Scan using menu_item_category_pk on menu_item_category b (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)
>                                             Index Cond: (menu_item_category_id = a.menu_item_category_id)
>                                 ->  Index Scan using menu_item_variant_pk on menu_item_variant c (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)
>                                       Index Cond: (menu_item_variant_id = (SubPlan 1))
>                                       Filter: (a.menu_item_id = menu_item_id)
>                                       SubPlan 1
>                                         ->  Limit (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)
>                                               ->  Aggregate (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)
>                                                     ->  Index Scan using "idx$$_023a0001" on menu_item_variant (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)
>                                                           Index Cond: (menu_item_id = a.menu_item_id)
>                                                           Filter: (deleted = 'N'::bpchar)
>                                                           Rows Removed by Filter: 4
>                           ->  Index Scan using menu_item_variant_type_pk on menu_item_variant_type d (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)
>                                 Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)
>                                 Filter: ((is_hidden)::text = 'false'::text)
>                     ->  Index Scan using s
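One pointer for verifying where the time goes: re-run the statement with
buffer accounting, so each plan node also reports shared-buffer hits
versus actual reads. A minimal sketch against one of the tables above
(same schema assumed; substitute the full query to profile the whole
plan):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM menu_item WHERE restaurant_id = 1528;
    -- "Buffers: shared hit=N" pages came from shared_buffers;
    -- "read=N" pages came from the OS or disk, i.e. real I/O.

Note that the plan above already finishes in under 6 ms for 89 rows, so
the query itself does not look like the cause of a slow API under load.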
Page File Size Reached Critical Threshold PostgreSQL V13
Hello Everyone,

I trust that everyone is doing well!

We have installed PostgreSQL v13 on Windows Server 2016; the server has
32 GB of RAM and a 270 GB disk. Later we faced some performance issues
with the database, and after a deep dive we increased shared_buffers to
16 GB. Since that change we have been seeing warnings that the page file
size has reached a critical threshold. Currently the page file size is
9504 MB.

It would be highly appreciated if you could recommend/suggest any
solution or idea.

Br,
Haseeb Ahmad
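For context, a shared_buffers change like the one described above is
normally applied with standard commands along these lines; this is a
sketch, not necessarily the exact steps used on this server:

    SHOW shared_buffers;

    ALTER SYSTEM SET shared_buffers = '16GB';
    -- Takes effect only after restarting the PostgreSQL service, e.g.
    -- "postgresql-x64-13" on a default Windows installation.

On Windows, shared_buffers counts against the system commit charge, which
is likely what the page-file threshold warning is reacting to.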
Re: AWS forcing PG upgrade from v9.6 a disaster
Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon
RDS, I wondered why I am paying AWS for an RDS-based version when I
was forced by their POLICY to go through the effort I did. I'm not one
of the crowd who thinks, "It works OK, so I don't update anything". I'm
usually one who is VERY quick to apply upgrades, especially when there
is a fallback ability. However, the initial failure to successfully
upgrade from v9.6 to any more recent major version put me in a
time-limited box that I really don't like to be in.
If I'm going to have to deal with maintenance issues, like I easily did
when I ran native PostgreSQL, why not go back to that? So, I've ported
my database back to native PostgreSQL v13.3 on an AWS EC2 instance. It
looks like I will save about 40% of the cost, which is in accord with
this article: https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/
Why am I mentioning this here? Because there were minor issues &
benefits in porting back to native PostgreSQL that may be of interest here:
First, pg_dumpall (v13.3) errors out because on RDS you cannot be a
superuser, & it tries to dump protected stuff. If there is a way around
that, I'd like to know it, even though it's not an issue now. pg_dump
works OK, but of course you don't get the roles dumped. Fortunately, I
kept script files that have all the database setup, so I just ran them
to create all the relationships, & then used the pg_dump output. Worked
flawlessly.
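(One possible way around the pg_dumpall superuser problem, for anyone
hitting it: since v10, pg_dumpall has a --no-role-passwords option that
reads pg_roles instead of the protected pg_authid, so it can run without
superuser. Roughly, with placeholder endpoint and user names:

    pg_dumpall --globals-only --no-role-passwords \
        -h <rds-endpoint> -U <master-user> > globals.sql

Password hashes are not included, so roles need their passwords reset
after restoring.)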
Second, I noticed that the compressed ("-Z6" level) output from pg_dump
is less than one-tenth of the disk size of the restored database.
That's a LOT less than the size of the backups that AWS was charging me for.
Third, once you increase your disk size in RDS, you can never decrease
it, unless you go through the above port to a brand new instance (RDS or
native PostgreSQL). RDS backups must be restored to the same size
volume (or larger) that they were created for. A VACUUM FULL ANALYZE on
RDS requires more than doubling the required disk size (I tried with
less several times). This is easily dealt with on an EC2 Linux
instance, requiring only a couple minutes of DB downtime.
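For sizing that headroom: VACUUM FULL rewrites each table into a fresh
copy before dropping the old one, so it needs free space roughly equal to
the table currently being rewritten; in the worst case that approaches
the size of the whole database. A quick check with the standard size
functions:

    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- Largest tables, including indexes and TOAST:
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;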
Fourth, while AWS is forcing customers to upgrade from v9.6, the only
PostgreSQL client tools that AWS currently provides in their standard
repository are for v9.6!!! That means when you want to use any
of their client tools on newer versions, you have problems. psql gives
you a warning on each startup, & pg_dump simply (& correctly) won't back
up a newer DB. If you add their "optional" repository, you can use
v12.6 tools, but v13.3 is only available by hand-editing the repo file
to include v13 (which I did). For this level of support, I pay extra?
I don't think so.
Finally, the AWS support forums are effectively "write-only." Most of
the questions asked there never get ANY response from other users, &
AWS only uses them to post announcements, from what I can tell. I got a
LOT more help here in this thread, & last I looked, I don't pay anyone here.
Re: Page File Size Reached Critical Threshold PostgreSQL V13
On Thu, Jun 10, 2021 at 05:45:45AM +0500, Haseeb Khan wrote:
> We have installed PostgreSQL v13 on Windows Server 2016; the server has
> 32 GB of RAM and a 270 GB disk. Later we faced some performance issues
> with the database, and after a deep dive we increased shared_buffers to
> 16 GB. Since that change we have been seeing warnings that the page file
> size has reached a critical threshold. Currently the page file size is
> 9504 MB.

Hi,

How large is your DB? (Or the "active set" of the DB, if parts of it are
accessed infrequently.)

What was the original performance issue that led you to increase
shared_buffers?

You've set shared_buffers to half of your RAM, which may be a "worst case"
setting, since everything that's read into shared_buffers must first be
read into the OS cache. So it may be that many blocks are cached twice,
rather than relying on a smaller shared_buffers only for the "hottest"
blocks, and the larger OS cache for everything else.

There are exceptions to the guideline - for example, if your DB is 23 GB
in size, it might make sense to have the entire thing in 24 GB of
shared_buffers. But most DBs don't need to fit in shared_buffers, and you
shouldn't make that a goal unless you can measure a performance benefit.

--
Justin
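A quick way to judge whether a large shared_buffers is earning its keep is
the cache hit ratio from the statistics views (a sketch; check it after
the workload has been running for a while):

    SELECT datname, blks_hit, blks_read,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
               AS hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();

If the ratio is already high with a smaller shared_buffers, the extra
gigabytes are mostly duplicating the OS cache.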
Re: Page File Size Reached Critical Threshold PostgreSQL V13
Hi Justin,

Do you mean I should request an increase of the server's RAM from 32 GB
to 64 GB and keep the same parameter settings?

Br,
Haseeb Ahmad
