Looks like it is harmless -- scrub would write a zero-length row when
tombstones expire and there is nothing left, instead of writing no row
at all. Fix attached to the Jira ticket.
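In other words, the shape of the fix is roughly the following (a
self-contained model only, not the actual patch and not Cassandra's real
API; Row, gc(), and the tombstone encoding here are all made up):

    // Model of the fix, not Cassandra code: Row and gc() stand in for
    // the real sstable row and tombstone-expiry logic.
    import java.util.ArrayList;
    import java.util.List;

    class ScrubSketch {
        record Row(String key, List<String> columns) {}

        // Drop expired tombstones; may leave a row with no live columns.
        static Row gc(Row row) {
            List<String> live = new ArrayList<>();
            for (String c : row.columns())
                if (!c.startsWith("tombstone:"))
                    live.add(c);
            return new Row(row.key(), live);
        }

        static List<Row> scrub(List<Row> input) {
            List<Row> out = new ArrayList<>();
            for (Row row : input) {
                Row reduced = gc(row);
                // The bug: appending 'reduced' unconditionally emitted a
                // zero-length row when nothing survived GC. The fix:
                if (reduced.columns().isEmpty())
                    continue; // write no row at all
                out.add(reduced);
            }
            return out;
        }
    }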
On Tue, Mar 8, 2011 at 8:58 PM, Jonathan Ellis wrote:
> It *may* be harmless depending on where those zero-length rows are coming from.
Turn on debug logging and see if the output looks like what I posted
to https://issues.apache.org/jira/browse/CASSANDRA-2296
It *may* be harmless depending on where those zero-length rows are
coming from. I've added asserts to 0.7 branch that fire if we attempt
to write a zero-length row, so if t…
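Roughly, the guard amounts to an assertion at the point where a row is
serialized; a sketch with made-up names, not the actual 0.7 branch diff:

    // Illustrative write-time guard; appendRow and its signature are
    // invented, not the real SSTableWriter API.
    class WriterGuardSketch {
        void appendRow(String key, byte[] serializedRow) {
            // Fires (under -ea) if an upstream bug hands us an empty row,
            // pointing at the producer instead of corrupting the sstable.
            assert serializedRow.length > 0
                : "attempt to write zero-length row for key " + key;
            // ... write index entry, bloom filter, and row data ...
        }
    }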
alienth on IRC is reporting the same error. His upgrade path was 0.6.8
to 0.7.1 to 0.7.3.
It's probably a bug in scrub. If we can get an sstable exhibiting the
problem posted here or on Jira, that would help troubleshooting.
On Tue, Mar 8, 2011 at 10:31 AM, Karl Hiramoto wrote:
> On 08/03/2011 17:09, Jonathan Ellis wrote:
On 03/08/11 21:45, Sylvain Lebresne wrote:
> Did you run scrub as soon as you updated to 0.7.3?
>
Yes, within a few minutes of starting up 0.7.3 on the node.
> And did you have problems/exceptions before running scrub?
Not sure.
> If yes, did you have problems with only 0.7.3, or also with 0.7.2?
I had similar errors with earlier 0.7.x releases, related to testing I did
for the mails with subject "Argh: Data Corruption (LOST DATA) (0.7.0)".
I do not see those corruptions or the above error anymore with the 0.7.3
release, as long as the dataset is created from scratch. The patch (2104)
mentioned in th…
Did you run scrub as soon as you updated to 0.7.3?
And did you have problems/exceptions before running scrub?
If yes, did you have problems with only 0.7.3, or also with 0.7.2?
If the problems started with running scrub, since it takes a snapshot
before running, can you try restarting a test cluster from that snapshot?
On 08/03/2011 17:09, Jonathan Ellis wrote:
> No.
> What is the history of your cluster?
It started out as 0.7.0 RC3, and I've upgraded to 0.7.0, 0.7.1, 0.7.2,
and 0.7.3 within a few days after each was released.
I have 6 nodes with about 10GB of data each, RF=2. Only one CF; every
row/column has a TTL.
No.
What is the history of your cluster?
On Tue, Mar 8, 2011 at 5:34 AM, Karl Hiramoto wrote:
> I have 1000's of these in the log. Is this normal?
>
> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>     at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
I have 1000's of these in the log. Is this normal?
java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
    at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
    at org.apache.cassandra.db.CompactionMan…
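For context on the message itself: the reader knows the total size
recorded for the row, then reads the bloom filter's length from inside
it, and it gives up when the stated filter length cannot possibly fit
within the row. A rough sketch of that sanity check (illustrative, not
the real SSTableIdentityIterator code):

    import java.io.DataInput;
    import java.io.EOFException;
    import java.io.IOException;

    // Model of the row-header sanity check; names and layout are
    // illustrative, not Cassandra's actual on-disk format handling.
    class RowHeaderSketch {
        static void skipBloomFilter(DataInput in, long rowSize) throws IOException {
            int bfSize = in.readInt();
            if (bfSize > rowSize) {
                // A zero-length or corrupt row makes this impossible,
                // which produces the error seen in the log above.
                throw new EOFException("bloom filter claims to be longer than entire row size");
            }
            in.skipBytes(bfSize);
        }
    }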