Internode speculative retry is on by default with p99.
Client-side retry varies by driver / client.
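For reference, the per-table setting looks roughly like this in CQL (a sketch; the keyspace and table names are hypothetical):

    -- Sketch: set table-level speculative retry to the 99th percentile.
    -- Other accepted values include 'NONE', 'ALWAYS', and fixed thresholds like '50ms'.
    ALTER TABLE my_keyspace.my_table
    WITH speculative_retry = '99PERCENTILE';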
> On Oct 17, 2021, at 1:59 PM, S G wrote:
>
> "The harder thing to solve is a bad coordinator node slowing down all reads
> coordinated by that node"
> I think this is the root of the …
Also, for the percentile-based speculative retry, how big a time period
is used to calculate the percentile?
If it is only a few seconds, then the computed percentile will rise very
quickly when server performance degrades.
But if it is up to a few minutes (or it is configurable), then its
percentile will …
"The harder thing to solve is a bad coordinator node slowing down all reads
coordinated by that node"
I think this is the root of the problem and since all nodes act as
coordinator nodes, so it guaranteed that if any 1 node slows down (High GC,
Segment Merging etc), it will slow down 1/N queries in
Some random notes, not necessarily going to help you, but:
- You probably have vnodes enabled, which means one bad node is PROBABLY a
replica of almost every other node, so the fanout here is worse than it
should be, and
- You probably have speculative retry on the table set to a percentile. As
the …
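If you want to confirm the vnode guess, num_tokens in cassandra.yaml is the switch; a sketch (256 was the long-standing default):

    # cassandra.yaml -- vnodes are in effect whenever num_tokens > 1
    num_tokens: 256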
Hi,
Fixed this problem finally. The "*name*" attribute for
fieldInputTransformer and fieldOutputTransformer in the solrconfig.xml MUST
have the value "*dse*". This was the value given in the documentation and the
FT blog. I had changed it to a different name to make it more readable, and
it seems it got …
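For anyone hitting the same issue, the working shape of the config is roughly the following (the class names are placeholders for your own transformers; only the name="dse" part is the fix described above):

    <!-- solrconfig.xml: the name attribute must be "dse" -->
    <fieldInputTransformer name="dse" class="com.example.MyInputTransformer"/>
    <fieldOutputTransformer name="dse" class="com.example.MyOutputTransformer"/>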
Have you verified that the documented reference example functions as
expected on your system? If so, then incrementally morph it towards your
own code to discover exactly at which stage the problem occurs. Or just
having the reference example side by side with your own code/schema/table
will help h…
I had verified that it works on a 2-node cluster where one is setup as
online, and the other as search. That's on our customer env where I don't
have full access, and this is the only difference I could see so far.
On Mar 18, 2016 8:15 PM, "Jack Krupansky" wrote:
> Have you verified that the docu…
On Sat, Nov 10, 2012 at 6:16 PM, Drew Kutcharian wrote:
> Thanks Rob, this makes sense. We only have one rack at this point, so I think
> it'd be better to start with PropertyFileSnitch to make Cassandra think that
> these nodes each are in a different rack without having to put them on
> diffe…
Thanks Rob, this makes sense. We only have one rack at this point, so I think
it'd be better to start with PropertyFileSnitch to make Cassandra think that
these nodes each are in a different rack without having to put them on
different subnets. And I will have more flexibility (at the cost of ke…
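A sketch of that layout in cassandra-topology.properties (the file PropertyFileSnitch reads; the IPs are placeholders), giving each node its own rack within one data center:

    # cassandra-topology.properties: node_ip=datacenter:rack
    10.0.0.1=DC1:RAC1
    10.0.0.2=DC1:RAC2
    10.0.0.3=DC1:RAC3
    # fallback for nodes not listed above
    default=DC1:RAC1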
On Mon, Nov 5, 2012 at 12:23 PM, Drew Kutcharian wrote:
>> Switching from SimpleStrategy to RackAware can be a pain.
>
> Can you elaborate a bit? What would be the pain point?
If you don't maintain the same replica placement vis-à-vis nodes on
your cluster, you have to dump and reload.
Simple ex…
I understand that with one node we will have no HA, but since we are just
starting out we wanted to see what would be the bare minimum to go to
production with and as we see traction we can add more nodes.
> Switching from SimpleStrategy to RackAware can be a pain.
Can you elaborate a bit? What would be the pain point?
Should be fine if one node can deal with your read and write load.
Switching from SimpleStrategy to RackAware can be a pain. That's a
potential growth point way down the line (if you ever have your nodes on
different switches). You might want to just set up your keyspace as
RackAware if you intend t…
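In later terms, "RackAware" placement means NetworkTopologyStrategy (the successor of the old RackAwareStrategy); a minimal sketch with hypothetical keyspace and data center names:

    CREATE KEYSPACE my_keyspace
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 2};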
On Mon, Nov 5, 2012 at 12:49 PM, Drew Kutcharian wrote:
> Hey Guys,
>
> What should I look out for when deploying a single node installation? We want
> to launch a product that uses Cassandra and since we are going to have very
> little load initially, we were thinking of just going live with on…
> Even more: if you enable read repair, the chances of having bad writes
> decrease for any further reads. This will make your cluster become
> consistent again faster after a failure.
Under 1.0 the default RR probability was reduced to 10%. Because Hinted Handoff
was changed to also store h…
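That 10% default corresponds to the read_repair_chance table option (since removed in Cassandra 4.0); in older CQL it could be tuned per table, e.g. (table name is hypothetical):

    -- Historical option, removed in 4.0.
    ALTER TABLE my_keyspace.my_table
    WITH read_repair_chance = 0.1;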
" By default Cassandra tries to write to both nodes, always. Writes will
only fail (on a node) if it is down, and even then hinted handoff will
attempt to keep both nodes in sync when the troubled node comes back up.
The point of having two nodes is to have read and write availability in the
face o…
Doing reads and writes at CL=1 with RF=2 N=2 does not imply that the reads
will be inconsistent. It's more complicated than the simple counting of
blocked replicas. It is easy to support the notion that it will be largely
consistent, in fact very consistent for most use cases.
By default Cassandra…
You'll need to either read or write at at least quorum to get consistent
data from the cluster, so you may as well do both.
Now that you mention it, I was wrong about downtime: with a two-node
cluster, reads or writes at quorum will mean both nodes need to be online.
Perhaps you could have an emergen…
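The arithmetic behind that: quorum = floor(RF/2) + 1, so with RF=2 a quorum is 2 replicas, i.e. both nodes; and a read is guaranteed to see the latest write when R + W > RF. In cqlsh that looks like (table is hypothetical):

    -- quorum for RF=2: floor(2/2) + 1 = 2 -> both nodes must be up
    CONSISTENCY QUORUM;  -- session-level consistency in cqlsh
    SELECT * FROM my_keyspace.my_table WHERE id = 1;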
Thanks for the comments, I guess I will end up doing a 2 node cluster with
replica count 2 and read consistency 1.
-- Drew
On Mar 15, 2012, at 4:20 PM, Thomas van Neerijnen wrote:
> So long as data loss and downtime are acceptable risks a one node cluster is
> fine.
> Personally this is usual…
So long as data loss and downtime are acceptable risks, a one-node cluster
is fine.
Personally this is usually only acceptable on my workstation; even my dev
environment is redundant, because servers fail, usually when you least want
them to, like for example when you've decided to save costs by wai…
Hi Drew,
One other disadvantage is the lack of "consistency level" and
"replication". Both are part of high availability / redundancy. So you
would really need to back up your single-node "cluster" to some other
external location.
Good luck!
2012/3/15 Drew Kutcharian
> Hi,
>
> We are worki…
Just solved it. I’m using localhost for the listen_address, 0.0.0.0 for the
rpc_address, and 127.0.0.1 for the seeds.
Cheers,
Steve
From: Vijay [mailto:vijay2...@gmail.com]
Sent: Thursday, December 08, 2011 2:15 PM
To: user@cassandra.apache.org
Subject: Re: Single node
You can add a DNS entry…
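For anyone searching later, the corresponding cassandra.yaml fragment is roughly this (a sketch of the settings Steve describes):

    # cassandra.yaml
    listen_address: localhost
    rpc_address: 0.0.0.0
    seed_provider:
      - class_name: org.apache.cassandra.locator.SimpleSeedProvider
        parameters:
          - seeds: "127.0.0.1"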
You can add a DNS entry with multiple IPs, or something like an elastic IP
which will keep switching between the active machines. Or you can also
write your custom seed provider class. Not sure if you will get a quorum
when the devs are on vacation :)
Regards,
On Thu, Dec 8, 2011 at 11:05 AM …
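A minimal sketch of such a custom seed provider, assuming the pre-4.0 SeedProvider interface (the exact interface varies across versions, and the DNS name here is an assumption):

    // Hypothetical sketch; verify against the SeedProvider interface of your version.
    package com.example;

    import java.net.InetAddress;
    import java.net.UnknownHostException;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.cassandra.locator.SeedProvider;

    public class DnsSeedProvider implements SeedProvider {
        // Cassandra instantiates providers with the parameters map from cassandra.yaml.
        public DnsSeedProvider(Map<String, String> args) {}

        @Override
        public List<InetAddress> getSeeds() {
            try {
                // Resolve a DNS name that rotates over the live machines.
                return Arrays.asList(InetAddress.getAllByName("seeds.example.com"));
            } catch (UnknownHostException e) {
                return Collections.emptyList();
            }
        }
    }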
You are right, our write traffic is indeed pretty intense, as we are now at
the stage of initializing data.
Then we do need some more nodes here.
Thanks very much Martin.
On Thu, Jun 10, 2010 at 9:04 PM, Dr. Martin Grabmüller <
martin.grabmuel...@eleven.de> wrote:
> Your problem is probably not th…
Your problem is probably not the amount of data you store, but the number of
SSTable files. When these increase, read latency goes up. Write latency may
also go up because of compaction. Check in the data directory whether there
are many data files, and check via JMX whether compaction is happe…
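Both checks Martin suggests are also exposed through nodetool, which reads the same JMX data (keyspace/table names are placeholders):

    # SSTable count per table ("cfstats" on older versions, "tablestats" later)
    nodetool cfstats my_keyspace.my_table
    # progress of active and pending compactions
    nodetool compactionstats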