As detailed below. The collection where we are having issues has 16 shards with
2 replicas each.
On Sun, May 10, 2020, 9:10 PM matthew sporleder wrote:
> Why so many shards?
>
> > On May 10, 2020, at 9:09 PM, Ganesh Sethuraman wrote:
> >
> > We are using a dedicated host, CentOS, in EC2 r5.12xlarge ...
We are using a dedicated host, CentOS, in EC2 r5.12xlarge (48 CPU, ~360GB
RAM), 2 nodes. Swappiness is set to 1, with a General Purpose 2TB EBS SSD
volume. JVM heap size is 18GB, with G1 GC enabled. About 92 collections with
an average of 8 shards and 2 replicas each. Most of the updates come in over
daily batch updates. ...
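
For reference, a collection with the layout described above (8 shards, 2
replicas per shard) would be created along these lines; this is just a sketch,
and the host, collection name, and config set name are placeholders rather
than our actual values:

    import requests

    SOLR = "http://localhost:8983/solr"   # placeholder; one of the two Solr nodes

    # Create a collection with 8 shards and 2 replicas per shard
    # (16 replica cores spread across the 2 nodes).
    requests.get(
        f"{SOLR}/admin/collections",
        params={
            "action": "CREATE",
            "name": "example_collection",                # placeholder name
            "numShards": 8,
            "replicationFactor": 2,
            "maxShardsPerNode": 8,                       # allow 8 replica cores per node
            "collection.configName": "example_config",   # placeholder config set
        },
    ).raise_for_status()
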
On 5/10/2020 4:48 PM, Ganesh Sethuraman wrote:
The additional info is that when we execute the test for longer (20 mins) we
see better response times; however, when we run a short test (5 mins) and
rerun it after an hour or so, we see slow response times again. Note that we
don't update th...
Here is a quick update based on your question, and some additional
information that may help.
Do not, repeat NOT, use expungeDeletes after each deleteByQuery; it is
a very expensive operation. Perhaps after the nightly batch, but
I doubt that’ll help much anyway.

30% deleted docs is quite normal, and should definitely not
change the response time by a factor of 100! So there’s
some other issue.
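
If it helps to see the shape of it, the nightly batch can just delete and then
do a single plain commit, with no expungeDeletes anywhere. Here's a rough
sketch against the JSON update API; the host, collection name, and query are
made-up placeholders:

    import requests

    SOLR = "http://localhost:8983/solr"   # placeholder host
    COLLECTION = "your_collection"        # placeholder collection name

    # Delete the expired documents as part of the nightly batch.
    # No expungeDeletes here: forcing Solr to rewrite large segments
    # just to purge deleted docs is very expensive.
    requests.post(
        f"{SOLR}/{COLLECTION}/update",
        json={"delete": {"query": "expire_date:[* TO NOW/DAY]"}},  # placeholder query
    ).raise_for_status()

    # One ordinary commit at the end of the batch; normal segment merging
    # will reclaim the deleted docs over time.
    requests.post(
        f"{SOLR}/{COLLECTION}/update",
        json={"commit": {}},
    ).raise_for_status()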