Thanks all for your support.
I executed the discussed process (barring the repair, as the table was read for
reporting only) and it worked fine in production.
Regards
Manish
The risk is that you violate consistency while you run repair.
Assume you have three replicas for that range: a, b, c.
At some point b misses a write, but it's committed on a and c, satisfying quorum.
Now c has a corrupt sstable.
You empty c and bring it back with no data and start repair.
Then the app reads at quorum and may pick b and c, missing the write that was committed.
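To make that concrete, here is a hypothetical trace of the window (RF=3, reads and writes at QUORUM, replicas a/b/c as above):

    write X  -> acked by a and c (b missed it)   : succeeds at QUORUM
    c is wiped and restarted empty; repair still running
    read X   -> coordinator picks b and c        : neither has X, stale/empty answer

Until repair finishes streaming data back to c, a is the only replica holding that write, so a QUORUM read that lands on b and c silently misses committed data.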
Thanks Jeff for your response.
Do you see any risk in the following approach (a rough command sketch follows below)?
1. Stop the node.
2. Remove all sstable files from the
/var/lib/cassandra/data/keyspace/tablename-23dfadf32adf33d33s333s33s3s33
directory.
3. Start the node.
4. Run a full repair on this particular table.
I wanted to go thi
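For what it's worth, a rough shell sketch of those four steps, assuming a systemd-managed install, the default data directory, and that the long suffix is just this table's ID (adjust names and paths for your environment):

    # 1. stop the node
    sudo systemctl stop cassandra

    # 2. remove the sstable files for this one table only
    rm /var/lib/cassandra/data/keyspace/tablename-<table_id>/*

    # 3. start the node
    sudo systemctl start cassandra

    # 4. full repair of just this table
    nodetool repair --full keyspace tablename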
Agree this is both strictly possible and more common with LCS. The only
thing that's strictly correct to do is to treat every corrupt sstable
exception as a failed host, and replace it just like you would a failed
host.
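As a sketch of that "replace it like a failed host" path (assuming a 3.x package install where JVM flags go in cassandra-env.sh; the IP is a placeholder), the replacement node is started with the replace_address option so it streams that node's ranges from the healthy replicas:

    # on the replacement node, before its first start (e.g. in cassandra-env.sh)
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<ip_of_node_with_corrupt_sstable>"

    # then start Cassandra and let it finish bootstrapping/streaming before it serves reads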
On Thu, Feb 13, 2020 at 10:55 PM manish khandelwal <
manishkhandelwa...@gmail.co
Thanks Erick
I would like to explain how data resurrection can take place with a single
SSTable deletion.
Consider this case of a table with Leveled Compaction Strategy (a sketch of how
this can play out follows the steps):
1. Data A was written a long time back.
2. Data A is deleted and a tombstone is created.
3. After GC grace the tombstone is purgeable.
4. No
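(Purely as a hypothetical illustration of where this can lead, a sketch of how the rest of the sequence could play out:

    sstable X, old / higher level : still holds Data A
    sstable Y, newer              : holds the tombstone for A, now past gc_grace
    Y turns out to be the corrupt sstable and only Y is deleted
    -> the tombstone disappears while A survives in X, so A is readable again
    -> a subsequent repair can then propagate the resurrected A to the other replicas)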
The log shows that the problem occurs when decompressing the SSTable
but there's not much actionable info from it.
> I would like to know what the "ordinary hammer" would be in this case. Do you
> want to suggest that deleting only the corrupt sstable file (in this case
> mc-1234-big-*.db) would be su
Hi Erick
Thanks for your quick response. I have attached the full stacktrace, which
shows the exception during the validation phase of the table repair.
I would like to know what the "ordinary hammer" would be in this case. Do you
want to suggest that deleting only the corrupt sstable file (in this case
mc-1234-big
It will achieve the outcome you are after but I doubt anyone would
recommend that approach. It's like using a sledgehammer when an ordinary
hammer would suffice. And if you were hitting some bug then you'd run into
the same problem anyway.
Can you post the full stack trace? It might provide us som
Hi Erick
Thanks for the reply.
The reason for the corruption is unknown to me. I just found the corrupt table
when the scheduled repair failed with the logs showing
ERROR [ValidationExecutor:16] 2020-01-21 19:13:18,123
CassandraDaemon.java:228 - Exception in thread
Thread[ValidationExecutor:16,1,main]org.apach
You need to stop C* in order to run the offline sstable scrub utility.
That's why it's referred to as "offline". :)
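For completeness, a minimal offline run looks roughly like this (keyspace and table names are placeholders; the node is stopped first):

    sudo systemctl stop cassandra       # node must be down for the offline tool
    sstablescrub keyspace tablename     # offline scrub of that table's sstables
    sudo systemctl start cassandra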
Do you have any idea on what caused the corruption? It's highly unusual
that you're thinking of removing all the files for just one table.
Typically if the corruption was a result of
Hi
I see a corrupt SSTable in one of my keyspace tables on one node. The cluster is
3 nodes with replication factor 3. The Cassandra version is 3.11.2.
I am thinking along the following lines to resolve the corrupt SSTable issue
(rough commands are sketched after the list).
1. Run nodetool scrub.
2. If step 1 fails, run the offline sstablescrub.
3. If step 2 fails,
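For step 1, the online scrub runs against the live node, roughly (names are placeholders):

    nodetool scrub keyspace tablename

Step 2's offline counterpart is sstablescrub, which has to be run with Cassandra stopped on that node.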