Hello again,
Back to this after a while...
As far as I can tell, whenever DC2 is unavailable there is one node in
DC1 that acts as the coordinator. When DC2 becomes available again, this one
node sends the hints to only one node in DC2, which then sends the replicas
on to the other nodes in its local DC.
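If you want to double-check where those hints sit while DC2 is down, 1.2.x
keeps them in the system.hints table, so something along these lines should
list which target host IDs still have hints queued (rough sketch with the
DataStax Python driver; the contact address is only an example):

from collections import Counter

from cassandra.cluster import Cluster

# Rough sketch for 1.2.x, where pending hints live in system.hints.
# '10.2.1.103' is only an example contact point.
cluster = Cluster(['10.2.1.103'])
session = cluster.connect('system')

# target_id is the host ID of the node each hint will be replayed to.
rows = session.execute('SELECT target_id FROM hints')
pending = Counter(row.target_id for row in rows)

for host_id, count in pending.items():
    print(host_id, count)   # one entry per DC2 node that is still owed data

cluster.shutdown()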
Hello Matt,
nodetool status:
Datacenter: MAN
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load      Owns (effective)  Host ID                               Token                 Rack
UN  10.2.1.103  89.34 KB  99.2%             b7f8bc93-bf39-475c-a251-8fbe2c7f7239  -9211685935328163899  RAC1
UN  10.2.1.102  86.32 KB  0.7%              1f8937e1-9
Thanks Vasileios. I think I need to make a call as to whether to switch to
vnodes or stick with single tokens for my multi-DC cluster.
Would you be able to show a nodetool ring/status from your cluster, so I can
see what the token assignment looks like?
Thanks
Matt
On Wed, Jun 4, 2014 at 8:31 AM, Vasileio
I should have said that earlier, really... I am using 1.2.16 and vnodes
are enabled.
Thanks,
Vasilis
--
Kind Regards,
Vasileios Vlachos
Thanks for your responses!
Matt, I did a test with 4 nodes, 2 in each DC, and the answer appears to
be yes. The tokens seem to be unique across the entire cluster, not just
on a per-DC basis. I don't know if the number of nodes deployed is
enough to reassure me, but this is my conclusion for now.
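If it is any use, the same thing can be checked from the driver's view of the
ring rather than by eyeballing nodetool output; a rough sketch with the
DataStax Python driver (the contact address is a placeholder):

from collections import Counter

from cassandra.cluster import Cluster

cluster = Cluster(['10.2.1.103'])   # any live node; address is a placeholder
cluster.connect()

tmap = cluster.metadata.token_map
per_dc = Counter(host.datacenter for host in tmap.token_to_host_owner.values())

# Every token maps to exactly one owning host, and owners from both DCs
# sit on the same ring, so the range is cluster-wide rather than per DC.
print('distinct tokens in the ring:', len(tmap.ring))
print('tokens owned per DC:', dict(per_dc))

cluster.shutdown()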
On Fri, May 30, 2014 at 4:08 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> Basically you sort of confirmed that if down_time > max_hint_window_in_ms
> the only way to bring DC1 up-to-date is anti-entropy repair.
>
Also, read repair does not help either as we assumed that down_time
Hi Vasilis,
With regards to Question 2:
* How are tokens being assigned when adding a 2nd DC? Is the range
-2^63 to 2^63 - 1 for each DC, or is it -2^63 to 2^63 - 1 for the entire
cluster? (I think the latter is correct) *
Have you been able to deduce an answer to this (assuming Murmur3Partitioner)?
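For reference, the arithmetic with Murmur3Partitioner: the ring covers
-2^63 to 2^63 - 1 exactly once for the whole cluster, so evenly spaced single
tokens are usually worked out over all nodes in all DCs combined. A
plain-Python sketch (not output from any cluster):

RING_MIN = -2**63          # Murmur3Partitioner minimum token
RING_SIZE = 2**64          # total number of token values in the ring

def initial_tokens(node_count):
    """Evenly spaced tokens for node_count nodes across the whole cluster."""
    return [RING_MIN + (RING_SIZE * i) // node_count for i in range(node_count)]

# e.g. 4 nodes (2 per DC) still share one ring, so 4 distinct tokens:
print(initial_tokens(4))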
Thanks for your responses; Ben, thanks for the link.
Basically you sort of confirmed that if down_time > max_hint_window_in_ms,
the only way to bring DC1 up-to-date is an anti-entropy repair. Read
consistency level is irrelevant to the problem I described, as I am reading
at LOCAL_QUORUM. In this situation
Short answer:
If the time elapsed > max_hint_window_in_ms, then hints will stop being created.
You will need to rely on your read consistency level, read repair, and
anti-entropy repair operations to restore consistency.
Long answer:
http://www.slideshare.net/jasedbrown/understanding-antientropy-in-
When one node or DC is down, the coordinator nodes being written through will
notice this and store hints (hinted handoff is the mechanism); those hints are
later used to send the data that could not be replicated initially.
http://www.datastax.com/dev/blog/modern-hinted-handoff
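To make the "rely on your read consistency level" part concrete: a read issued
at a level that reaches the remote DC (QUORUM or ALL, given the keyspace
replicates to both DCs) lets read repair fix the stale replicas it touches,
although a full nodetool repair is still needed to cover everything. A rough
sketch with the DataStax Python driver; keyspace, table and address are
placeholders:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['10.2.1.103'])          # placeholder contact point
session = cluster.connect('my_keyspace')   # placeholder keyspace

# QUORUM over RF=3+3 needs 4 replicas, so it has to reach the remote DC,
# and read repair can bring those replicas back in line as rows are read.
stmt = SimpleStatement('SELECT * FROM my_table WHERE id = %s',
                       consistency_level=ConsistencyLevel.QUORUM)
rows = session.execute(stmt, ('some-key',))

cluster.shutdown()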
-Tupshin
Hello All,
We have plans to add a second DC to our live Cassandra environment.
Currently RF=3 and we read and write at QUORUM. After adding DC2 we are
going to be reading and writing at LOCAL_QUORUM.
If my understanding is correct, when a client sends a write request, if
the consistency leve
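For what it's worth, the client side of that normally looks something like the
sketch below with the DataStax Python driver (the DC name comes from the
nodetool output earlier in the thread; keyspace, table and address are
placeholders):

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy
from cassandra.query import SimpleStatement

# Pin the client to its local DC so LOCAL_QUORUM is measured there.
cluster = Cluster(
    ['10.2.1.103'],                          # placeholder contact point
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc='MAN')))
session = cluster.connect('my_keyspace')     # placeholder keyspace

write = SimpleStatement('INSERT INTO my_table (id, value) VALUES (%s, %s)',
                        consistency_level=ConsistencyLevel.LOCAL_QUORUM)
session.execute(write, ('some-key', 'some-value'))   # 2 of the 3 local replicas ack

cluster.shutdown()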