Hi all, I'm getting the following error when executing a repair on a table:
error: Repair job has failed with the error message: Repair command #4
failed with error Did not get replies from all endpoints.. Check the
logs on the repair participants for further details
-- Stack
Hi Everyone,
1. We are currently facing a data discrepancy issue where a UDT
(User-Defined Type) column is returning different values across multiple
data centers. We are running DSE 6.9.6 on Cassandra 3.11.
2. To resolve this, we have already attempted a full repair and a
-pr
Hi Team,
When we try to execute a repair with a token range on Cassandra
3.11.13, it fails with the errors below.
java.lang.RuntimeException: Repair job has failed with the error message:
[2025-04-16 13:18:15,587] Some repair failed
at org.apache.cassandra.tools.RepairRunner.progress
(CassandraTableRepairManager.java:74)
If you check this:
https://github.com/apache/cassandra/blob/cassandra-4.0/src/java/org/apache/cassandra/db/repair/CassandraTableRepairManager.java#L72
There is: if (force || !cfs.snapshotExists…)
So if “force” is “false”, which is the case when the repair is global, which
into IR sstables with more caveats. Probably worth a jira to add a faster
solution
On Thu, Feb 15, 2024 at 12:50 PM Kristijonas Zalys wrote:
> Hi folks,
>
> One last question regarding incremental repair.
>
> What would be a safe approach to temporarily stop running incre
running out of disk space, and you should address
that issue first before even considering upgrading Cassandra.
On 15/02/2024 18:49, Kristijonas Zalys wrote:
Hi folks,
One last question regarding incremental repair.
What would be a safe approach to temporarily stop running incremental
repair
Hi folks,
One last question regarding incremental repair.
What would be a safe approach to temporarily stop running incremental
repair on a cluster (e.g.: during a Cassandra major version upgrade)? My
understanding is that if we simply stop running incremental repair, the
cluster's nodes ca
In a two-datacenter cluster (11 nodes each) we are seeing repair getting
stuck. The issue is that when repair is triggered on a particular keyspace, the
repair session is lost and Cassandra never returns for that particular session.
There are no "WARN" or "ERROR" entries in the Cassandra logs. No
The over-streaming is only problematic for the repaired SSTables, but it
can be triggered by inconsistencies within the unrepaired SSTables
during an incremental repair session. This is because, although an
incremental repair will only compare the unrepaired SSTables, it
will stream both
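As a side note, a minimal way to check which set an SSTable is in, using the sstablemetadata tool that ships with Cassandra (the data path is a placeholder for one of your table's data files):
sstablemetadata /var/lib/cassandra/data/my_keyspace/my_table-*/*-Data.db | grep -E 'SSTable|Repaired at'
# "Repaired at: 0" means unrepaired; a non-zero timestamp means the SSTable is in the repaired set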
Thank you very much for your explanation.
Streaming happens on the token range level, not the SSTable level, right? So,
when running an incremental repair before the full repair, the problem that
“some unrepaired SSTables are being marked as repaired on one node but not on
another” should not
Unfortunately repair doesn't compare each partition individually.
Instead, it groups multiple partitions together and calculates a hash of
them, stores the hash in a leaf of a Merkle tree, and then compares the
Merkle trees between replicas during a repair session. If any one of the
parti
Caution, using the method you described, the amount of data streamed at
the end with the full repair is not the amount of data written between
stopping the first node and the last node, but depends on the table
size, the number of partitions written, their distribution in the ring
and the
> That's a feature we need to implement in Reaper. I think disallowing the
> start of the new incremental repair would be easier to manage than pausing
> the full repair that's already running. It's also what I think I'd expect as
> a user.
>
> I'l
> Full repair running for an entire week sounds excessively long. Even if
> you've got 1 TB of data per node, 1 week means the repair speed is less than
> 2 MB/s, that's very slow. Perhaps you should focus on finding the bottleneck
> of the full repair speed and work on t
Just one more thing. Make sure you run 'nodetool repair -full' instead
of just 'nodetool repair'. That's because the command's default was
changed in Cassandra 2.x. The default was full repair before that
change, but the default is now incremental repair.
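For illustration (the keyspace name is a placeholder), the two invocations being contrasted are:
nodetool repair -full my_keyspace   # explicit full repair
nodetool repair my_keyspace         # after the 2.x change, this runs an incremental repair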
O
Not disabling auto-compaction may result in repaired SSTables getting
compacted together with unrepaired SSTables before the repair state is
set on them, which leads to a mismatch in the repaired data between nodes,
and potentially very expensive over-streaming in a future full repair.
You
t in Reaper. I think disallowing the
> start of the new incremental repair would be easier to manage than pausing
> the full repair that's already running. It's also what I think I'd expect
> as a user.
>
> I'll create an issue to track this.
>
> On Sat, 3 Feb 2024,
Hi Sebastian,
That's a feature we need to implement in Reaper. I think disallowing the
start of the new incremental repair would be easier to manage than pausing
the full repair that's already running. It's also what I think I'd expect
as a user.
I'll create an issue
Full repair running for an entire week sounds excessively long. Even if
you've got 1 TB of data per node, 1 week means the repair speed is less
than 2 MB/s, that's very slow. Perhaps you should focus on finding the
bottleneck of the full repair speed and work on that instead.
On
Hi,
> 2. use an orchestration tool, such as Cassandra Reaper, to take care of that
> for you. You will still need monitoring and alerting to ensure the repairs are run
> successfully, but fixing a stuck or failed repair is not very time sensitive,
> you can usually leave it till Monday m
Hi Kristijonas,
It is not possible to run two repairs, regardless of whether they are
incremental or full, for the same token range and on the same table
concurrently. You have two options:
1. create a schedule that doesn't overlap, e.g. run incremental repair
daily except the 1
They (incremental and full repairs) are required to run separately at
different times. You need to identify a schedule, for example running
incremental repairs every week for 3 weeks and then running a full repair
in the 4th week.
Regards
Manish
On Sat, Feb 3, 2024 at 7:29 AM Kristijonas Zalys wrote
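As a rough sketch of the kind of non-overlapping schedule described above (assuming a weekly trigger from cron or similar, and my_keyspace as a placeholder keyspace):
# hypothetical weekly job: full repair in the first week of the month, incremental otherwise
if [ "$(date +%d)" -le 7 ]; then
    nodetool repair -full my_keyspace
else
    nodetool repair my_keyspace
fi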
Hi Bowen,
Thank you for your help!
So given that we would need to run both incremental and full repair for a
given cluster, is it safe to have both types of repair running for the same
token ranges at the same time? Would it not create a race condition?
Thanks,
Kristijonas
On Fri, Feb 2, 2024
Hi Kristijonas,
To answer your questions:
1. It's still necessary to run full repair on a cluster on which
incremental repair is run periodically. The frequency of full repair is
more of an art than science. Generally speaking, the less reliable the
storage media, the more frequently
Hi folks,
I am working on switching from full to incremental repair in Cassandra
v4.0.6 (soon to be v4.1.3) and I have a few questions.
1.
Is it necessary to run regular full repair on a cluster if I already run
incremental repair? If yes, what frequency would you recommend for full
,
> "sstablemetadata" and "sstabledump" commands handy.
>
>
> On 23/01/2024 18:07, manish khandelwal wrote:
>
> In one of our two-datacenter setups (3+3), one Cassandra node is getting a lot
> of data streamed from other nodes during repair to the extent that
t; and "sstabledump" commands handy.
On 23/01/2024 18:07, manish khandelwal wrote:
In one of our two-datacenter setups (3+3), one Cassandra node is getting a
lot of data streamed from other nodes during repair to the extent that
it fills up and ends up with a full disk. I am not able to understan
actually already present –
just in the other set of SSTables.
> On 23.01.2024 at 19:07, manish khandelwal wrote:
>
> In one of our two-datacenter setups (3+3), one Cassandra node is getting a lot of
> data streamed from other nodes during repair to the extent that it fills up
> and e
In one of our two-datacenter setups (3+3), one Cassandra node is getting a lot
of data streamed from other nodes during repair to the extent that it fills
up and ends up with a full disk. I am not able to understand what could be the
reason that this node is misbehaving in the cluster. Cassandra version is
Hi Jeff,
Does subrange repair mark the SSTable as repaired? From my memory, it
doesn't.
Regards,
Bowen
On 27/11/2023 16:47, Jeff Jirsa wrote:
I don’t work for DataStax, that’s not my blog, and I’m on a phone and
potentially missing nuance, but I’d never try to convert a cluster to
era.
Instead I’d leave compaction running and slowly run incremental repair across
parts of the token range, slowing down as pending compactions increase
I’d choose token ranges such that you’d repair 5-10% of the data on each node
at a time
> On Nov 23, 2023, at 11:31 PM, Sebast
to run the full repair for your entire cluster,
not each node. Depending on the number of nodes in your cluster, each
node should take significantly less time than that unless you have RF
set to the total number of nodes. Keep in mind that you only need to
disable the auto-compaction for the dur
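For what it's worth, a hypothetical shape of that window, using the standard nodetool toggles (keyspace and table names are placeholders):
nodetool disableautocompaction my_keyspace my_table
# ... run the repair and mark the SSTables as repaired here ...
nodetool enableautocompaction my_keyspace my_table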
period of
time (if you are interested in the reasons why, they are at the end of this
e-mail).
Therefore, I am wondering whether a slightly different process might work better
for us:
1. Run a full repair (we periodically run those anyway).
2. Mark all SSTables as repaired, even though they will
: Unable to take a
snapshot bec3dba0-7d70-11ee-99d3-7bda513c2b90 on test_keyspace/test1
This behavior is reproduced consistently, when the following are true:
* It is a normal sequential repair (--full and --sequential),
* It is not a global repair, meaning at least one datacenter is
ndra.io.util.CompressedChunkReader$Mmap.readChunk(CompressedChunkReader.java:221)
> ... 46 common frames omitted
> Caused by: org.apache.cassandra.io.compress.CorruptBlockException:
> (/data/3/cassandra/data/doc/source_correlations-4ce2d9f0912b11edbd6d4d9b3bfd78b2/nb-9816-big-Data.db)
at 604552 of length 7911.
at
org.apache.cassandra.io.util.CompressedChunkReader$Mmap.readChunk(CompressedChunkReader.java:209)
Ideas?
-Joe
On 8/7/2023 10:27 PM, manish khandelwal wrote:
What do the logs of /172.16.20.16:7000 say when
the repair failed? It ind
What do the logs of /172.16.20.16:7000 say when the repair failed? It indicates
"validation failed". Can you check system.log on /172.16.20.16:7000 and
see what it says? Looks like you have some issue with doc/origdoc,
probably some corrupt sstable. Try to run repair for individual table a
Thank you. I've tried:
nodetool repair --full
nodetool repair -pr
They all get to 57% on any of the nodes, and then fail. Interestingly
the debug log only has INFO - there are no errors.
[2023-08-07 14:02:09,828] Repair command #6 failed with error
Incremental repair session 83dc17d0
Quick drive-by observation:
> Did not get replies from all endpoints.. Check the
> logs on the repair participants for further details
> dropping message of type HINT_REQ due to error
> org.apache.cassandra.net.AsyncChannelOutputPlus$FlushException: The
> channel this output str
but it has hung. I tried to run:
> nodetool repair -pr
> on each of the nodes, but they all fail with some form of this error:
>
> error: Repair job has failed with the error message: Repair command #521
> failed with error Did not get replies from all endpoints.. Check the
Hi All - been using reaper to do repairs, but it has hung. I tried to run:
nodetool repair -pr
on each of the nodes, but they all fail with some form of this error:
error: Repair job has failed with the error message: Repair command #521
failed with error Did not get replies from all endpoints
Hello.
I had the same issues on full repair. I've tried various GC
settings; the most performant is ZGC on Java 11, but I had some
stability issues. I kept the G1GC settings from 3.11.x and got the same
issues as yours: CPU load over 90%, and a growing count of open file
descriptors (up t
Hi Everyone
We have migrated some of our clusters from Cassandra 3.11.11 to 4.0.1. We
do repairs periodically triggered by some automation. Each time we run
repair we do full `-full` sequential `-seq` primary `-pr` repairs for a
portion of the full ring range and we finish iterating over the full
s a [-pr -full] repair. I think
> you're confusing the concept of a full repair vs incremental. This document
> might help you understand the concepts --
> https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/operations/opsRepairNodesManualRepair.html.
> Cheers!
>
>>
No, I'm just saying that [-pr] is the same as [-pr -full], NOT the same as
just [-full] on its own. Primary range repairs are not compatible with
incremental repairs so by definition, -pr is a [-pr -full] repair. I think
you're confusing the concept of a full repair vs incremental. Thi
Thanks Erick for the response. So in option 3, -pr is not taken into
consideration which essentially means option 3 is the same as option 1
(which is the full repair).
Right, just want to be sure?
Best,
Deepak
On Tue, Sep 7, 2021 at 3:41 PM Erick Ramirez
wrote:
>
>1. Will perform
1. Will perform a full repair vs incremental which is the default in
some later versions.
2. As you said, will only repair the token range(s) on the node for
which it is a primary owner.
3. The -full flag with -pr is redundant -- primary range repairs are
always done as a full
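Since -pr only repairs each node's primary ranges, it has to be run on every node for the whole ring to be covered; a hypothetical loop (host names and keyspace are placeholders):
for host in node1 node2 node3; do
    nodetool -h "$host" repair -pr my_keyspace
done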
Hi There,
We are on Cassandra 3.0.11 and I want to understand the
difference between the following commands
1. nodetool repair -full
2. nodetool repair -pr
3. nodetool repair -full -pr
As per my understanding 1. will do the full repair across all keyspaces. 2.
with -pr, restricts repair
aper.
>>
>> Thanks,
>> Jim
>>
>> On Mon, Aug 2, 2021 at 7:12 PM Amandeep Srivastava <
>> amandeep.srivastava1...@gmail.com> wrote:
>>
>>> Can anyone please help with the above questions? To summarise:
>>>
>>> 1) What is the i
High inter-dc latency could make writes more likely not to land, which
would make repair do more work.
Also true for reads and writes - waiting for the cross-DC request will keep
threads around longer, so more concurrent work, so more GC.
It may be that the GC is coming from the read/write path, and
Can inter-DC latency cause high GC pauses? Other clusters are working fine with
the same configuration; only this particular cluster is giving long GC pauses
during repair.
Regards
Manish
On Tue, Aug 3, 2021 at 6:42 PM Jim Shaw wrote:
> CMS heap too large will have long GC. you may try reduce heap
the above questions? To summarise:
>>
>> 1) What is the impact of using mmap only for indices besides a
>> degradation in read performance?
>> 2) Why does the off heap consumed during Cassandra full repair remains
>> occupied 12+ hours after the repair completion and
A too-large CMS heap will have long GC pauses. You may try reducing the heap on
1 node to see, or switch to G1 if that's easier.
Thanks,
Jim
On Tue, Aug 3, 2021 at 3:33 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> Long GC (1 seconds /2 seconds) pauses seen during repair on the
>
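Before experimenting, one way to confirm which collector a node is currently running (assuming a single Cassandra process; the pgrep pattern matching the CassandraDaemon main class is an assumption about your install):
jcmd "$(pgrep -f CassandraDaemon)" VM.flags | tr ' ' '\n' | grep -E 'UseConcMarkSweepGC|UseG1GC'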
t of using mmap only for indices besides a degradation
> in read performance?
> 2) Why does the off heap consumed during Cassandra full repair remains
> occupied 12+ hours after the repair completion and is there a
> manual/configuration driven way to clear that earlier?
>
> Thanks
Long GC (1 seconds /2 seconds) pauses seen during repair on the
coordinator. Running full repair with partition range option. GC collector
is CMS and heap is 14G. Cluster is 7+7. Cassandra version is 3.11.2. Not
much traffic when repair is running. What could be the probable cause of
long gc
Can anyone please help with the above questions? To summarise:
1) What is the impact of using mmap only for indices besides a degradation
in read performance?
2) Why does the off-heap memory consumed during a Cassandra full repair remain
occupied 12+ hours after the repair completion, and is there a
exists one? Wanted to understand the role of the
heap and off-heap memory separately during the process.
Also, for my case, once the nodes reach 95% memory usage, it stays
there for almost 10-12 hours after the repair is complete, before falling
back to 65%. Any pointers on what might be consumin
Based on the symptoms you described, it's most likely caused by SSTables
being mmap()ed as part of the repairs.
Set `disk_access_mode: mmap_index_only` so only index files get mapped and
not the data files. I've explained it in a bit more detail in this article
-- https://community.datastax.com/qu
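A minimal sketch of that change, assuming cassandra.yaml lives at /etc/cassandra/cassandra.yaml and the key is not already set (it is only read at startup, so a rolling restart is needed):
grep -q '^disk_access_mode:' /etc/cassandra/cassandra.yaml ||
    echo 'disk_access_mode: mmap_index_only' | sudo tee -a /etc/cassandra/cassandra.yaml
sudo systemctl restart cassandra   # service name may differ per install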
Could it be related to
https://issues.apache.org/jira/browse/CASSANDRA-14096 ?
On 28/07/2021 13:55, Amandeep Srivastava wrote:
Hi team,
My Cluster configs: DC1 - 9 nodes, DC2 - 4 nodes
Node configs: 12 core x 96GB ram x 1 TB HDD
Repair params: -full -pr -local
Cassandra version: 3.11.4
I
Hi team,
My Cluster configs: DC1 - 9 nodes, DC2 - 4 nodes
Node configs: 12 core x 96GB ram x 1 TB HDD
Repair params: -full -pr -local
Cassandra version: 3.11.4
I'm running a full repair on DC2 nodes - one node and one keyspace at a
time. During the repair, RAM usage on all 4 nodes spikes up
>
> Oh. So our data is all messed up now because of the “nodetool compact” I
> ran.
>
>
>
> Hi Erick. Thanks for the quick reply.
>
>
>
> I just want to be sure about compact. I saw Cassandra will do compaction
> by itself even when I do not run “nodetool compact” manually (nodetool
> compaction
ache.org
Subject: Re: TWCS repair and compact help
You definitely shouldn't perform manual compactions -- you should let the
normal compaction tasks take care of it. It is unnecessary to manually run
compactions since it creates more problems than it solves as I've exp
Hi,
On Tue, Jun 29, 2021 at 12:34 PM Erick Ramirez
wrote:
> You definitely shouldn't perform manual compactions -- you should let the
> normal compaction tasks take care of it. It is unnecessary to manually run
> compactions since it creates more problems than it solves as I've explained
> in th
You definitely shouldn't perform manual compactions -- you should let the
normal compaction tasks take care of it. It is unnecessary to manually run
compactions since it creates more problems than it solves as I've explained
in this post -- https://community.datastax.com/questions/6396/. Cheers!
Hi:
We need some help with Cassandra repair and compaction for a table that uses TWCS.
We are running Cassandra 4.0-rc1. We have a database called test_db; its biggest
table, "minute_rate", stores time-series data. It has the following configuration:
CREATE TABLE test_db.minute_rate (
marke
t. You don't want to be issuing simultaneous create
>>> statements from different clients. IF NOT EXISTS won't necessarily catch
>>> all cases.
>>>
>>>> As for the schema mismatch, what is the best way of fixing that issue?
>>>>
eate statements from different clients. IF NOT EXISTS won't necessarily
>>> catch all cases.
>>>
>>>
>>>> As for the schema mismatch, what is the best way of fixing that issue?
>>>> Could Cassandra recover from that on its own or is there a nod
it seems a very heavy procedure for that.
A rolling restart is usually enough to fix the issue. You
might want to repair afterwards, and check that data didn't
make it to different versions of the table on different nodes
(in which case some more int
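For reference, one quick way to confirm whether the schema versions have converged before and after the rolling restart:
nodetool describecluster   # a healthy cluster shows a single entry under "Schema versions"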
all cases.
>>
>>
>>> As for the schema mismatch, what is the best way of fixing that issue?
>>> Could Cassandra recover from that on its own or is there a nodetool command
>>> to force schema agreement? I have heard that we have to restart the nodes 1
t way of fixing that issue?
>> Could Cassandra recover from that on its own or is there a nodetool command
>> to force schema agreement? I have heard that we have to restart the nodes 1
>> by 1, but it seems a very heavy procedure for that.
>>
> A rolling restart is usually
ing that issue?
> Could Cassandra recover from that on its own or is there a nodetool command
> to force schema agreement? I have heard that we have to restart the nodes 1
> by 1, but it seems a very heavy procedure for that.
>
A rolling restart is usually enough to fix the issue. You
OK I will check that, thank you!
Sébastien
On Thu, 27 May 2021 at 11:07, Bowen Song wrote:
> Hi Sébastien,
>
>
> The error message you shared came from the repair coordinator node's
> log, and it's the result of failures reported by 3 other nodes. If you
> coul
Hi Sébastien,
The error message you shared came from the repair coordinator node's
log, and it's the result of failures reported by 3 other nodes. If you
could have a look at the 3 nodes listed in the error message -
135.181.222.100, 135.181.217.109 and 135.181.221.180, you shou
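A hypothetical first pass on each of those three nodes (the log path is the package-install default and may differ):
grep -E 'ERROR|RepairSession|ValidationExecutor' /var/log/cassandra/system.log | tail -n 50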
Sorry Kane, I am a little bit confused, we are talking about schema version
at node level.
Which client operations could trigger a schema change at the node level? Do you
mean that, for example, creating a new table triggers a schema change globally, not
only at the single keyspace/table level?
Sébastien
On Thu, 27 May
I don't have schema changes, except keyspace and table creations. But
they are done from multiple sources indeed, with a "create if not exists"
statement, on demand. Thank you for your answer, I will try to see if I
could precreate them then.
As for the schema mismatch, what is the best way of
>
> I have had that error sometimes when there is a schema mismatch but also when all
> schemas match. So I think this is not the only cause.
>
Have you checked the logs for errors on 135.181.222.100, 135.181.217.109,
and 135.181.221.180? They may give you some better information about why
they are sending bad
chema mismatch or node unavailability might result in this.
>
> Thanks,
>
> Dipan Shah
>
> --
> *From:* Sébastien Rebecchi
> *Sent:* Wednesday, May 26, 2021 7:35 PM
> *To:* user@cassandra.apache.org
> *Subject:* unable to repair
>
> Hi,
>
M
To: user@cassandra.apache.org
Subject: unable to repair
Hi,
I have an issue with repairing my Cassandra cluster, which was already the case
with Cassandra 3 and is not solved with Cassandra 4 RC1.
I run the following command in a for loop, one node at a time:
nodetool -h THE_NODE -u jTHE_USER -pw THE_PASSW
Hi,
I have an issue with repairing my Cassandra cluster, which was already the
case with Cassandra 3 and is not solved with Cassandra 4 RC1.
I run the following command in a for loop, one node at a time:
nodetool -h THE_NODE -u jTHE_USER -pw THE_PASSWORD repair --full -pr
and I always get the
Thanks for all your suggestions!
I'm looking into it and so far it seems to be mainly a problem of disk
I/O, as the host is running on spindle disks and being a DR of an entire
cluster gives it many changes to follow.
First (easy) try will be to add an SSD as ZFS cache (ZIL + L2ARC).
Should m
remote disaster recovery copy" 2.7 TiB.
>
> Doing repairs only on the production cluster takes a semi-decent time
> (24h for the biggest keyspace, which takes 90% of the space), but by
> doing repair across the two DCs takes forever, and segments often fail
> even if I increased R
e), but by
doing repair across the two DCs takes forever, and segments often fail
even if I increased Reaper segment time limit to 2h.
In trying to debug the issue, I noticed that "compactionstats -H" on the
DR node shows huge (and very very slow) validations:
compaction completed total
Kane also mentioned) for
subrange repair. Doing subrange repair yourself may lead to a lot
of trouble as calculating correct subranges is not an easy task.
On Tue, Mar 23, 2021 at 3:38 AM Kane Wilson wrote:
-pr on all nodes takes much longer as you'll do at least
n Mon, 22 Mar 2021 at 20:28, manish khandelwal <
> manishkhandelwa...@gmail.com> wrote:
>
>> Also try to use Cassandra reaper (as Kane also mentioned) for subrange
>> repair. Doing subrange repair yourself may lead to a lot of trouble as
>> calculating correct subranges is
Does describering not give the correct sub ranges for each node ?
On Mon, 22 Mar 2021 at 20:28, manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> Also try to use Cassandra reaper (as Kane also mentioned) for subrange
> repair. Doing subrange repair yourself may lead to a lo
Also try to use Cassandra reaper (as Kane also mentioned) for subrange
repair. Doing subrange repair yourself may lead to a lot of trouble as
calculating correct subranges is not an easy task.
On Tue, Mar 23, 2021 at 3:38 AM Kane Wilson wrote:
> -pr on all nodes takes much longer as you
, and managed services
On Tue, Mar 23, 2021 at 7:33 AM Surbhi Gupta
wrote:
> Hi,
>
> We are on open source 3.11.5 .
> We need to repair a production cluster .
> We are using num_token as 256 .
> What will be a better option to run repair ?
> 1. nodetool -pr (Primary rang
Hi,
We are on open source 3.11.5.
We need to repair a production cluster.
We are using num_tokens as 256.
What will be a better option to run repair?
1. nodetool -pr (Primary range repair on all nodes, one node at a time)
OR
2. nodetool -st -et (Subrange repair , taking the ranges for each
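For reference, option 2 for a single sub-range would look roughly like this (START_TOKEN and END_TOKEN are placeholders taken from the ring or describering output, and the keyspace name is made up):
nodetool repair -full -st START_TOKEN -et END_TOKEN my_keyspace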
e by one.
To avoid this issue in the future, I'd recommend you avoid causing
Cassandra to do anti-compaction during repairs. You can achieve that by
specifying a DC in the "nodetool repair" command, such as "nodetool
repair -full -dc DC1". This will work even you only
Hello Team,
Sorry, this might be a simple question.
I was working on Cassandra 2.1.14
Node1 -- 4.5 mb data
Node2 -- 5.3 mb data
Node3 -- 4.9 mb data
Node3 had been down for 90 days.
I brought it up and it joined the cluster.
To sync data I ran nodetool repair --full
Repair was successful
Hi All,
We are facing failures of the read-repair stage with the error Digest
Mismatch, and the count is 300+ per day per node.
At the same time, we are experiencing nodes getting overloaded for a
quick couple of seconds due to long GC pauses (of around 7-8 seconds). We
are not running a repair
>> One more query: are all sstables (repaired + unrepaired) part of
>> anti-compaction? We are using full repair with the -pr option.
>>
>> Regards
>> Manish
>>
>> On Mon, Nov 9, 2020 at 11:17 AM Alexander DEJANOVSKI <
>> adejanov...@gmail.com> wrote:
>>
Pushpendra,
Probably you can read all the data using Spark with consistency level ALL
to repair the data.
Regards
Manish
On Mon, Nov 9, 2020 at 11:31 AM Alexander DEJANOVSKI
wrote:
> Hi,
>
> You have two options to disable anticompaction when running full repair:
>
> - add
Only sstables in the unrepaired state go through anticompaction.
On Mon, 9 Nov 2020 at 07:01, manish khandelwal
wrote:
> Thanks Alex.
>
> One more query: are all sstables (repaired + unrepaired) part of
> anti-compaction? We are using full repair with the -pr option.
>
> Regards
&
Thanks Alex.
One more query: are all sstables (repaired + unrepaired) part of
anti-compaction? We are using full repair with the -pr option.
Regards
Manish
On Mon, Nov 9, 2020 at 11:17 AM Alexander DEJANOVSKI
wrote:
> Hi Manish,
>
> Anticompaction is the same whether you run full or in
Hi,
You have two options to disable anticompaction when running full repair:
- add the list of DCs using the --dc flag (even if there's just a single DC
in your cluster)
- Use subrange repair, which is done by tools such as Reaper (it can be
challenging to do it yourself on a vnode cl