... had caused it, so we were happy it wasn't going to occur in production. Sorry
>> that is not much help, but I am not even sure it is the same issue you have.
>> >
>> > Paul
>> >
>> > On 7 May 2019, at 07:14, Roy Burstein wrote:
Sorry that is not much help, but I am not even sure it is the same issue you have.
Paul
> On 7 May 2019, at 07:14, Roy Burstein wrote:
>
> I can say that it happens now as well; currently no node has been
> added/removed.
> Corrupted sstables are usually the index files, and on some machines the
> sstable does not even exist on the filesystem.
I can say that it happens now as well; currently no node has been
added/removed.
Corrupted sstables are usually the index files, and on some machines the
sstable does not even exist on the filesystem.
On one machine I was able to dump the sstable to a dump file without any
issue. Any idea how to ...
Roy,
I have seen this exception before when a column had been dropped and then
re-added with the same name but a different type. In particular, we dropped a
column and re-created it as static, then had this exception from the old
sstables created prior to the DDL change.
Not sure if this applies in ...
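For illustration, a minimal sketch of the kind of schema change Paul describes; the keyspace, table, and column names here are invented, and whether the re-add is even accepted depends on the Cassandra version:

  # Hypothetical sequence: drop a regular column, then re-add a column with the
  # same name as a static column. Old sstables written before the change can
  # later fail to read. (The table needs clustering columns for "static" to apply.)
  cqlsh -e "ALTER TABLE my_ks.my_table DROP my_col;"
  cqlsh -e "ALTER TABLE my_ks.my_table ADD my_col text static;"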
Can the disk have bad sectors? fsck or something similar can help.
Long shot: a repair or some other operation conflicting. Would leave that to
others.
On Mon, May 6, 2019 at 3:50 PM Roy Burstein wrote:
> It happens on the same column families and they have the same DDL (as
> already posted). I did not check it after cleanup.
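If you want to chase the bad-sector theory, a couple of read-only checks along the lines suggested above; the device names are examples only:

  # SMART health/attributes for the drive (requires smartmontools).
  smartctl -a /dev/sda
  # Dry-run filesystem check; -n makes no changes (run against an unmounted or
  # read-only filesystem).
  fsck -n /dev/sda1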
It happens on the same column families and they have the same DDL (as
already posted). I did not check it after cleanup.
On Mon, May 6, 2019, 23:43 Nitan Kainth wrote:
> This is strange, never saw this. Does it happen to the same column family?
>
> Does it happen after cleanup?
>
> On Mon, May 6, ...
This is strange, never saw this. Does it happen to the same column family?
Does it happen after cleanup?
On Mon, May 6, 2019 at 3:41 PM Roy Burstein wrote:
> Yes.
>
> On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
>
>> Roy,
>>
>> You mean all nodes show corruption when you add a node to the cluster?
Yes.
On Mon, May 6, 2019, 23:23 Nitan Kainth wrote:
> Roy,
>
> You mean all nodes show corruption when you add a node to the cluster?
>
> Regards,
> Nitan
> Cell: 510 449 9629
>
> On May 6, 2019, at 2:48 PM, Roy Burstein wrote:
>
> It happened on all the servers in the cluster every time I have added a node.
Roy,
You mean all nodes show corruption when you add a node to the cluster?
Regards,
Nitan
Cell: 510 449 9629
> On May 6, 2019, at 2:48 PM, Roy Burstein wrote:
>
> It happened on all the servers in the cluster every time I have added a
> node.
> This is a new cluster, nothing was upgraded here; we ...
... tables there. The DDL of the corrupted sstables looks the same:
CREATE TABLE rawdata.a1 (
    session_start_time_timeslice bigint,
    uid_bucket int,
    vid_bucket int,
    pid int,
    uid text,
    sid bigint,
    vid bigint,
    data_type text,
    data_id bigint,
    data blob,
    PRIMARY KEY ...
Before you scrub, from which version were you upgrading and can you post a(n
anonymized) schema?
--
Jeff Jirsa
> On May 6, 2019, at 11:37 AM, Nitan Kainth wrote:
>
> Did you try sstablescrub?
> If that doesn't work, you can delete all files of this sstable id and then
> run repair -pr on this node.
Did you try sstablescrub?
If that doesn't work, you can delete all files of this sstable id and then
run repair -pr on this node.
On Mon, May 6, 2019 at 9:20 AM Roy Burstein wrote:
> Hi,
> We are having issues with Cassandra 3.11.4: after adding a node to the
> cluster we get many corrupted files ...
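A sketch of the two options Nitan outlines; the keyspace/table names, the data path, and the sstable generation are placeholders, not taken from the thread:

  # Option 1: scrub the affected table offline (with the node stopped), or use
  # "nodetool scrub my_ks my_table" while the node is running.
  sstablescrub my_ks my_table

  # Option 2: delete every component file of the corrupted sstable generation,
  # then repair this node's primary ranges.
  rm /var/lib/cassandra/data/my_ks/my_table-*/*-1234-big-*
  nodetool repair -pr my_ks my_table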
Hi,
We are having issues with Cassandra 3.11.4: after adding a node to the
cluster we get many corrupted files across the cluster (almost all nodes).
This is reproducible in our environment.
We have 69 nodes in the cluster, disk_access_mode: standard.
The stack trace:
WARN [ReadStage-4] 2019-05-06 ...
Another thing to keep in mind is that if you are hitting the issue I
described, waiting 60 seconds will not absolutely solve your problem; it
will only make it less likely to occur. If a memtable has been partially
flushed at the 60-second mark, you will end up with the same corrupt
sstable.
+1 for tablesnap
On Fri, Mar 28, 2014 at 4:28 PM, Jonathan Haddad wrote:
> I will +1 the recommendation on using tablesnap over EBS. S3 is at least
> predictable.
>
> Additionally, from a practical standpoint, you may want to back up your
> sstables somewhere. If you use S3, it's easy to pull
As I tried to say, EBS snapshots require much care or you get corruption
such as you have encountered.
Does Cassandra quiesce the file system after a snapshot using fsfreeze or
xfs_freeze? Somehow I doubt it...
On Fri, Mar 28, 2014 at 4:17 PM, Jonathan Haddad wrote:
> I have a nagging memory o
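For reference, manual quiescing around a block-level snapshot would look roughly like this; the mount point is an example and assumes the data directory sits on its own XFS/ext4 filesystem (Cassandra itself does not do this for you, as far as I know):

  # Block new writes and flush the filesystem before the EBS snapshot is cut.
  fsfreeze -f /var/lib/cassandra
  # ... trigger the EBS volume snapshot here (console or API) ...
  fsfreeze -u /var/lib/cassandra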
I will +1 the recommendation on using tablesnap over EBS. S3 is at least
predictable.
Additionally, from a practical standpoint, you may want to back up your
sstables somewhere. If you use S3, it's easy to pull just the new tables
out via aws-cli tools (s3 sync), to your remote, non-aws server,
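A sketch of the s3 sync approach Jonathan mentions; the bucket name, paths, and snapshot tag are placeholders:

  # Copy only new/changed files from a snapshot directory up to S3.
  aws s3 sync \
    /var/lib/cassandra/data/my_ks/my_table/snapshots/my_tag \
    s3://my-backup-bucket/cassandra/$(hostname)/my_ks/my_table/my_tag/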
I have a nagging memory of reading about issues with virtualization and not
actually having durable versions of your data even after an fsync (within
the VM). Googling around led me to this post:
http://petercai.com/virtualization-is-bad-for-database-integrity/
It's possible you're hitting this ...
Robert,
That is what I thought as well. But apparently something is happening. The
only way I can get away with doing this is adding a "sleep 60" right after
the nodetool snapshot is executed. I can reproduce this 100% of the time by
not issuing a sleep after nodetool snapshot.
This is the error ...
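For concreteness, the workaround Russ describes looks roughly like this; the snapshot tag and keyspace are placeholders, and the EBS step is whatever tooling you use:

  # Take the Cassandra snapshot, then wait before cutting the EBS snapshot so
  # the node has time to finish writing sstable components to disk.
  nodetool snapshot -t pre_ebs my_ks
  sleep 60
  # ... now create the EBS volume snapshot ...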
On Fri, Mar 28, 2014 at 12:21 PM, Russ Lavoie wrote:
> Thank you for your quick response.
>
> Is there a way to tell when a snapshot is completely done?
>
IIRC, the JMX call blocks until the snapshot completes. It should be done
when nodetool returns.
=Rob
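As a quick sanity check once nodetool returns, you can list the snapshot directory itself; the path assumes the default data directory layout, and the keyspace/table/tag names are placeholders:

  # Hard links to all sstable component files should already be present here.
  ls -l /var/lib/cassandra/data/my_ks/my_table/snapshots/my_tag/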
> ... https://github.com/synack/tablesnap
>
> ?
>
> How can I tell when a snapshot is fully complete so I do not have corrupted
> SSTables?
SSTables are immutable after they are created. I'm not sure how you're getting
a snapshot that has corrupted SSTables in it. If you can repro reliably, file a
JIRA on issues.
... run on ephemeral stripe, not EBS.
> so we can take snapshots of the data and create volumes based on those
> snapshots and restore them where we want to.
>
> https://github.com/synack/tablesnap
>
> ?
>
> How can I tell when a snapshot is fully complete so I do not have
> corrupted SSTables?
problem at all.
So my assumption is that Cassandra is doing a few more things after the
“nodetool snapshot” returns.
Now that you know what is going on, I have my question.
How can I tell when a snapshot is fully complete so I do not have corrupted
SSTables?
I can reproduce this 100% of the
On Wed, Feb 12, 2014 at 9:20 AM, Francisco Nogueira Calmon Sobral <
fsob...@igcorp.com.br> wrote:
> I've removed the corrupted sstables and 'nodetool repair' ran successfully
> for the column family. I'm not sure whether or not we've lost data.
>
If you r
Hi, Rahul.
I've removed the corrupted sstables and 'nodetool repair' ran successfully for
the column family. I'm not sure whether or not we've lost data.
Best regards,
Francisco Sobral
On Jan 30, 2014, at 3:58 PM, Rahul Menon wrote:
> Yes should delete all files
Yes, you should delete all the files related to that -ib-<generation>-*.db
sstable. Run a repair after the deletion.
On Thu, Jan 30, 2014 at 10:17 PM, Francisco Nogueira Calmon Sobral <
fsob...@igcorp.com.br> wrote:
> Ok. I'll try this idea with one sstable. But, should I delete all the
> files associated with it? I mean, there is a d...
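A sketch of what Rahul suggests, using the keyspace/CF names from the listing later in the thread; the generation number is left as a placeholder, so check the actual file names on your node before deleting anything (doing this with the node stopped is my own precaution, not something stated in the thread):

  # Remove every component (Data, Index, Filter, Statistics, ...) of the one
  # corrupted sstable generation, preferably with the node stopped.
  rm /var/lib/cassandra/data/Sessions/Users/Sessions-Users-ib-<generation>-*
  # Then pull the data back from the replicas.
  nodetool repair Sessions Users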
Ok. I'll try this idea with one sstable. But, should I delete all the files
associated with it? I mean, there is a difference in the number of files
between the BAD sstable and a GOOD one, as I've already shown:
BAD
--
-rw-r--r-- 8 cassandra cassandra 991M Nov 8 15:11
Sessions-Users-ib-251
Looks like the sstables are corrupt. I don't believe there is a method to
recover those sstables. I would delete them and run a repair to ensure data
consistency.
Rahul
On Wed, Jan 29, 2014 at 11:29 PM, Francisco Nogueira Calmon Sobral <
fsob...@igcorp.com.br> wrote:
> Hi, Rahul.
>
> I've run nodetool upgradesstables only on the problematic CF. ...
Hi, Rahul.
I've run nodetool upgradesstables only on the problematic CF. It threw the
following exception:
Error occurred while upgrading the sstables for keyspace Sessions
java.util.concurrent.ExecutionException:
org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException:
d ...
Francisco,
The sstables with *-ib-* in the name are from a previous version of C*. The
*-ib-* naming convention started at C* 1.2.1, but from 1.2.10 onwards I'm sure
it uses the *-ic-* convention. You could try running nodetool
upgradesstables, which should ideally upgrade the sstables with the *-ib- ...
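For reference, the command Rahul is pointing at is nodetool upgradesstables; the keyspace and column family here follow the 'Sessions'/'Users' names seen elsewhere in the thread:

  # Rewrite the old -ib- format sstables into the current format for one CF only.
  nodetool upgradesstables Sessions Users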
Dear experts,
We are facing an annoying problem in our cluster.
We have 9 Amazon extra-large Linux nodes, running Cassandra 1.2.11.
The short story is that after moving the data from one cluster to another,
we've been unable to run 'nodetool repair'. It gets stuck due to a
CorruptSSTableExceptio...
> We are starting to use Cassandra (version 2.0.1) and are doing system-level
> tests, and we have been running into a few issues with sstables being
> corrupted.
>
> The supposition is that these are caused by:
>
> https://issues.apache.org/jira/browse/CASSANDRA-5202
>
> One example is a corrupted ...