HH in 1.X+ is very good, but it is still an optimisation for achieving
consistency.
>> So I expect that even if I lose some HH then some other replica will reply
>> with data. Is that correct?
Run a repair and see.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http:/
On 04/11/2012 12:04 PM, ruslan usifov wrote:
HH - this is hinted handoff?
Yes
On 04/11/2012 11:49 AM, R. Verlangen wrote:
Not everything, just HH :)
I hope this works for me for the following reasons: I have quite a large RF (6
datacenters, each carrying one replica of the whole dataset), reads and writes
at CL ONE, a relatively small TTL of 10 days, no deletes, and the servers
almost never go down.
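(A minimal sketch of the kind of write described above: every write carries a
10-day TTL and uses consistency level ONE. The thread doesn't name a client,
so the DataStax Python driver, the keyspace and the table below are
assumptions for illustration only.)

# Hypothetical write with a TTL at CL ONE; all names are placeholders.
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

TEN_DAYS = 10 * 24 * 3600  # TTL in seconds, matching the 10-day TTL above

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("my_keyspace")  # placeholder keyspace

# Every write expires on its own, so nothing is ever deleted explicitly,
# which is the scenario in which the poster hopes repair can be skipped.
insert = SimpleStatement(
    "INSERT INTO events (id, payload) VALUES (%s, %s) USING TTL " + str(TEN_DAYS),
    consistency_level=ConsistencyLevel.ONE,
)
session.execute(insert, ("some-id", "some-value"))
cluster.shutdown()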
Well, if everything works 100% of the time there should be nothing to
repair; however, with a distributed cluster it would be pretty rare for that
to occur. At least that is how I interpret it.
BTW, I heard that you don't need to run repair if all of your data has a TTL,
all HH works, and you never delete your data.
Sorry for my bad English. So does QUORUM mean we don't have to run repair
regularly? That doesn't seem to follow from your answer.
Yes, I personally have configured it to perform a repair once a week, as
GCGraceSeconds is set to 10 days.
This is also what's in the manual (point 2):
http://wiki.apache.org/cassandra/Operations#Repairing_missing_or_inconsistent_data
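(To make that weekly routine concrete, here is a rough sketch of a repair
script that a weekly cron entry could drive. The keyspace list is a
placeholder and it assumes nodetool is on the PATH; neither comes from the
thread.)

#!/usr/bin/env python
"""Sketch of a weekly repair helper: run `nodetool repair` for each keyspace.
Run it more often than GCGraceSeconds (10 days in this thread) so that
deleted or expired data cannot resurface."""
import subprocess
import sys

KEYSPACES = ["my_keyspace"]  # placeholder: list the real keyspaces here

def repair(keyspace):
    # Standard anti-entropy repair for one keyspace on this node.
    return subprocess.call(["nodetool", "repair", keyspace])

if __name__ == "__main__":
    failed = [ks for ks in KEYSPACES if repair(ks) != 0]
    if failed:
        print("repair failed for: " + ", ".join(failed))
        sys.exit(1)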
Hello,
I have the following question: if we read from and write to a Cassandra
cluster at the QUORUM consistency level, does that allow us to skip running
nodetool repair regularly (i.e. every GCGraceSeconds)?
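(A small worked example of the quorum overlap behind this question; my own
illustration, not from the thread. With QUORUM reads and writes, every read
quorum intersects every write quorum, so reads of live data see the latest
value. That overlap says nothing about tombstones once they are purged after
GCGraceSeconds, which is what regular repair protects against.)

# Illustrative quorum arithmetic (plain Python, no Cassandra required).
def quorum(rf):
    """Replicas that must respond at consistency level QUORUM."""
    return rf // 2 + 1

for rf in (3, 5, 6):
    w = r = quorum(rf)
    # If W + R > RF, a read quorum always shares at least one replica with
    # the most recent write quorum, so the read sees the latest live value.
    print("RF=%d QUORUM=%d overlap guaranteed: %s" % (rf, w, w + r > rf))

# That overlap does not stop a replica that missed a delete from resurrecting
# the row once its tombstone is purged after GCGraceSeconds; that is why
# repair is still recommended at least once per GCGraceSeconds.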