This is sufficiently atypical that many people aren't going to have enough
intuition to figure it out without seeing your metrics / logs / debugging
data (e.g. heap dumps).
My only guess, and it's a pretty big guess, is that your write timeout is
low enough (or network quality bad enough, though …
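A minimal sketch, assuming the DataStax Python driver (the contact point,
keyspace, and column names below are placeholders, not the actual schema),
of how a too-low write timeout would surface on the client:

from cassandra import ConsistencyLevel, WriteTimeout
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1"])           # placeholder contact point
session = cluster.connect("supplier_ks")  # placeholder keyspace

stmt = SimpleStatement(
    "INSERT INTO supplier_details (id, name) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
try:
    session.execute(stmt, (42, "acme"), timeout=2.0)  # client-side timeout, seconds
except WriteTimeout as e:
    # Raised when the coordinator's write_request_timeout_in_ms expires
    # before enough replicas ack; comparing e.received_responses with
    # e.required_responses shows how far the write got.
    print("write timed out:", e)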
Kindly help in this regard. What could be the possible reason for the load
and mutation spike in the India data center?
Hi Arvinder,
It's a separate cluster. Here the max partition size is 32 MB.
Is this the same cluster with 1G partition size?
-Arvinder
Hi daemeon,
We have already tuned the TCP settings to improve the bandwidth. Earlier we
had a lot of hint and mutation message drops, which went away after tuning
TCP. Moreover, we are writing with CL LOCAL_QUORUM on the US side, so the
ack is taken from the local DC.
I'm still concerned about what could be the reason for the increase …
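For reference, a sketch (the DC name and contact point are assumed, not
taken from this cluster) of pinning the Python driver to the local DC so a
LOCAL_QUORUM write only waits on US replicas:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc="us_dc")  # assumed DC name
    ),
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
cluster = Cluster(["10.0.0.1"],  # placeholder contact point
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
# Acks now come from the local (US) DC only; the remote DC still
# receives every mutation asynchronously, which is why load can rise
# in India even though the writes ack locally.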
Hi Patrick,
Currently we are using Apache Cassandra version 3.11.6. We are performing
writes with CL LOCAL_QUORUM in the US-side DC. We have 4-5 tables,
including supplier_details, supplier_prod_details, and supplier_rating. We
also have a materialized view (mview) attached to the rating table.
For the batching part, I need to check with …
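Worth noting for the mutation spike: in Cassandra, every write to a base
table with a materialized view also generates view mutations, and those are
replicated cross-DC like any other write. A hypothetical sketch (the table,
column, and view names are assumptions, not the actual schema):

from cassandra.cluster import Cluster

session = Cluster(["10.0.0.1"]).connect("supplier_ks")  # placeholder names
# Each INSERT/UPDATE on supplier_rating now also produces mutations for
# rating_by_score, roughly doubling the mutation volume shipped to the
# remote DC.
session.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS rating_by_score AS
        SELECT supplier_id, score FROM supplier_rating
        WHERE score IS NOT NULL AND supplier_id IS NOT NULL
        PRIMARY KEY (score, supplier_id)
""")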