Kurt,
I cloned the original ticket. The new one is:
https://issues.apache.org/jira/browse/CASSANDRA-14691
I can't change the Assignee or unassign it.
Thanks,
Thomas
From: kurt greaves
Sent: Tuesday, 14 August 2018 04:53
To: User
Subject: Re: Data Corruption due to multiple Cassandra 2.1 processes?
> What is the proper workflow here to get this accepted? Create a new ticket
> dedicated for the backport referencing 11540 or re-open 11540?
>
> Thanks for your help.
>
> Thomas
>
> *From:* kurt greaves
> *Sent:* Monday, 13 August 2018 13:24
> *To:* User
> *Subject:* Re: Data Corruption due to multiple Cassandra 2.1 processes?
started another
Cassandra process. Unwinding it could be very ugly.
Sean Durity
From: kurt greaves
Sent: Monday, August 13, 2018 7:24 AM
To: User
Subject: [EXTERNAL] Re: Data Corruption due to multiple Cassandra 2.1 processes?
Yeah that's not ideal and could lead to problems. I think corru
Thanks Kurt.
What is the proper workflow here to get this accepted? Create a new ticket
dedicated for the backport referencing 11540 or re-open 11540?
Thanks for your help.
Thomas
From: kurt greaves
Sent: Monday, 13 August 2018 13:24
To: User
Subject: Re: Data Corruption due to multiple Cassandra 2.1 processes?
conflict with other services
>
> at org.apache.cassandra.net.MessagingService.getServerSockets(MessagingService.java:495) ~[apache-cassandra-2.1.18.jar:2.1.18]
>
> …
>
> Until Cassandra stops:
>
...
INFO [StorageServiceShutdownHook] 2018-08-05 21:11:54,361 Gossiper.java:1454 - Announcing shutdown
...
So we have a window of around 2 minutes during which Cassandra is modifying
existing data, although it shouldn't be.
Sounds like a potential candidate for data corruption, right? E.g. later o
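One way to avoid the double-process window described above is to serialize startup behind an exclusive lock. A minimal sketch, assuming `flock(1)` from util-linux is available; the `start_guarded` wrapper and the lock path are my own illustration, not part of Cassandra:

```shell
# Sketch: refuse to start a second process against the same data directories.
# The wrapper name and lock path are assumptions, not part of Cassandra.
start_guarded() {
  lock="$1"; shift
  (
    # Hold an exclusive, non-blocking lock on fd 9 for the process lifetime.
    flock -n 9 || { echo "refusing to start: $lock is held" >&2; exit 1; }
    "$@"   # e.g. cassandra -f
  ) 9>"$lock"
}
```

Invoked as `start_guarded /var/run/cassandra.lock cassandra -f`, a second invocation fails fast instead of spending two minutes rewriting live data.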
Little update.
I've managed to compute the token, and I can indeed SELECT the row from
CQLSH.
Interestingly enough, if I use CQLSH I do not get the exception (even if
the string is printed out).
I am now wondering whether, instead of data corruption, the error is
related to the reading
Hi all,
apparently the year started with a node (version 3.0.15) exhibiting some
data corruption (discovered by a spark job enumerating all keys).
The exception is attached below.
The invalid string is a partition key, and it is supposed to be a file
name. If I manually decode the bytes I get
On Tue, Oct 20, 2015 at 3:31 PM, Robert Coli wrote:
> On Tue, Oct 20, 2015 at 9:13 AM, Branton Davis wrote:
>
>>
>>> Just to clarify, I was thinking about a scenario/disaster where we lost
>> the entire cluster and had to rebuild from backups. I assumed we would
>> start each node with the ba
On Tue, Oct 20, 2015 at 9:13 AM, Branton Davis
wrote:
>
>> Just to clarify, I was thinking about a scenario/disaster where we lost
> the entire cluster and had to rebuild from backups. I assumed we would
> start each node with the backed up data and commit log directories already
> there and wit
On Mon, Oct 19, 2015 at 5:42 PM, Robert Coli wrote:
> On Mon, Oct 19, 2015 at 9:20 AM, Branton Davis wrote:
>
>> Is that also true if you're standing up multiple nodes from backups that
>> already have data? Could you not stand up more than one at a time since
>> they already have the data?
>
Date: Monday, October 19, 2015 at 3:40 PM
To: "user@cassandra.apache.org"
Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at once?
On Sun, Oct 18, 2015 at 8:10 PM, Kevin Burton wrote:
ouch.. OK.. I think I really shot myself in the foot here then. This might be bad.
On Mon, Oct 19, 2015 at 9:20 AM, Branton Davis
wrote:
> Is that also true if you're standing up multiple nodes from backups that
> already have data? Could you not stand up more than one at a time since
> they already have the data?
>
An operator probably almost never wants to add multiple
not-
On Sun, Oct 18, 2015 at 8:10 PM, Kevin Burton wrote:
> ouch.. OK.. I think I really shot myself in the foot here then. This
> might be bad.
>
Yep.
https://issues.apache.org/jira/browse/CASSANDRA-7069 - "Prevent operator
mistakes due to simultaneous bootstrap"
But this doesn't handle your case
>>
>> From: on behalf of Kevin Burton
>> Reply-To: "user@cassandra.apache.org"
>> Date: Sunday, October 18, 2015 at 8:10 PM
>> To: "user@cassandra.apache.org"
>> Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at once?
>>
> Date: Sunday, October 18, 2015 at 8:10 PM
> To: "user@cassandra.apache.org"
> Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at
> once?
>
> ouch.. OK.. I think I really shot myself in the foot here then. This
> might be bad.
>
`nodetool refresh` it into the new system.
>
>
>
> From: on behalf of Kevin Burton
> Reply-To: "user@cassandra.apache.org"
> Date: Sunday, October 18, 2015 at 8:10 PM
> To: "user@cassandra.apache.org"
> Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at once?
sstableloader or copy it to a new
host and `nodetool refresh` it into the new system.
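Spelled out, that path looks roughly like the following; the keyspace, table, and directory names are placeholders of mine, and the node must be running:

```shell
# Illustrative only; placeholder keyspace/table/paths.
# 1. Drop the backed-up sstables into the table's live data directory.
cp /backup/myks/mytable/*.db /var/lib/cassandra/data/myks/mytable/
# 2. Ask the running node to load them without a restart.
nodetool refresh myks mytable
```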
From: on behalf of Kevin Burton
Reply-To: "user@cassandra.apache.org"
Date: Sunday, October 18, 2015 at 8:10 PM
To: "user@cassandra.apache.org"
Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at once?
> From: on behalf of Kevin Burton
> Reply-To: "user@cassandra.apache.org"
> Date: Sunday, October 18, 2015 at 3:44 PM
> To: "user@cassandra.apache.org"
> Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at
> once?
>
> Ah shit.. I think we're seeing corruption.. missing records :-/
Date: Sunday, October 18, 2015 at 3:44 PM
To: "user@cassandra.apache.org"
Subject: Re: Would we have data corruption if we bootstrapped 10 nodes at once?
Ah shit.. I think we're seeing corruption.. missing records :-/
On Sat, Oct 17, 2015 at 10:45 AM, Kevin Burton wrote:
We just migrated from a 30 node clu
Ah shit.. I think we're seeing corruption.. missing records :-/
On Sat, Oct 17, 2015 at 10:45 AM, Kevin Burton wrote:
> We just migrated from a 30 node cluster to a 45 node cluster. (so 15 new
> nodes)
>
> By default we have auto_bootstrap = false
>
> so we just push our config to the cluster, th
We just migrated from a 30 node cluster to a 45 node cluster. (so 15 new
nodes)
By default we have auto_bootstrap = false
so we just push our config to the cluster, the cassandra daemons restart,
and they're now cluster members and are the only nodes in the cluster.
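Before pushing config like that, a quick sanity check on each node can confirm what `auto_bootstrap` is actually set to. A sketch; the function name and the default config path are my assumptions:

```shell
# Minimal sketch; function name and default config path are assumptions.
check_auto_bootstrap() {
  conf="$1"
  # Print the setting if present; Cassandra defaults to true when it is absent.
  grep -E '^auto_bootstrap:' "$conf" 2>/dev/null \
    || echo "auto_bootstrap not set (Cassandra then defaults to true)"
}
check_auto_bootstrap "${CASSANDRA_CONF:-/etc/cassandra/cassandra.yaml}"
```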
Anyway. While I was about 1/2
If you are using more than one node, make sure you have set the Consistency
Level of the request to QUORUM.
Otherwise, check your code for errors.
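For example, the lookup could be issued from cqlsh at QUORUM like this (the keyspace, table, and column names are placeholders of mine, and a reachable cluster is required):

```shell
# Illustrative fragment; CONSISTENCY is a cqlsh meta-command.
cqlsh -e "CONSISTENCY QUORUM;
          SELECT password_hash FROM myapp.users WHERE name = 'X';"
```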
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 22/06/2012, at 5:30 AM, Juan Ezquerro wrote:
Hi
I'm using version 1.1.1. I'm sorry about the lack of information; I don't
have stack traces and nothing crashes, it just returns empty. To explain: I log
in as usual, check in the DB whether there is a user named X who belongs to
client Y, and if so I return the password hash from the DB and do my checks.
> I can't quite describe what happened, but essentially one day I found that my
> column values that are supposed to be UTF-8 strings started getting bogus
> characters.
>
> Is there a known data corruption issue with 1.1?
>
I can't quite describe what happened, but essentially one day I found
that my column values that are supposed to be UTF-8 strings started
getting bogus characters.
Is there a known data corruption issue with 1.1?
Hi,
Unfortunately, this patch is already included in the build I have.
Thanks for the suggestion though!
Terje
On Sat, Mar 5, 2011 at 7:47 PM, Sylvain Lebresne wrote:
> Also, if you can, please be sure to try the new 0.7.3 release. We had a bug
> with the compaction of superColumns for instance
Also, if you can, please be sure to try the new 0.7.3 release. We had a bug
with the compaction of superColumns, for instance, that is fixed there
(https://issues.apache.org/jira/browse/CASSANDRA-2104). It also ships with a
new scrub command that tries to find out if your sstables are corrupted and
repair them.
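For reference, scrub is invoked per keyspace/column family along these lines (the names are placeholders; it needs a running node, and taking a snapshot first is prudent since scrub rewrites sstables):

```shell
# Illustrative only; placeholder keyspace and column family names.
nodetool -h localhost snapshot MyKeyspace        # safety copy first
nodetool -h localhost scrub MyKeyspace MyColumnFamily
```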
Hi Terje,
Can you attach the portion of your logs that shows the exceptions
indicating corruption? Which version are you on right now?
Ben
On 3/4/11 10:42 AM, Terje Marthinussen wrote:
We are seeing various other messages as well related to
deserialization, so this seems to be some random corruption somewhere
We are seeing various other messages as well related to deserialization, so
this seems to be some random corruption somewhere, but so far it may seem to
be limited to supercolumns.
Terje
On Sat, Mar 5, 2011 at 2:26 AM, Terje Marthinussen
wrote:
> Hi,
>
> Did you get anywhere on this problem?
>
>
Hi,
Did you get anywhere on this problem?
I am seeing similar errors unfortunately :(
I tried to add some quick error checking to the serialization, and it seems
like the data is ok there.
Some indication that this occurs in compaction and maybe in hinted handoff,
but no indication that it occu
Dan,
Do you have any more information on this issue? Have you been able to
discover anything from exporing your SSTables to JSON?
Thanks,
Ben
On 1/29/11 12:45 PM, Dan Hendry wrote:
I am once again having severe problems with my Cassandra cluster. This
time, I straight up cannot read sections of data (consistency level ONE).
I am once again having severe problems with my Cassandra cluster. This time,
I straight up cannot read sections of data (consistency level ONE). Client
side, I am seeing timeout exceptions. On the Cassandra node, I am seeing
errors as shown below. I don't understand what has happened or how to fix
In fact, on one node the hard disk filled up, so that's why we had to shift
Cassandra manually to another machine. Can you please tell us about any
workaround to restore the data?
On Thu, 2010-08-19 at 09:56 -0500, Jonathan Ellis wrote:
> You're moving data around manually? That sounds like a good way to
> confu
You're moving data around manually? That sounds like a good way to
confuse Cassandra's replication.
On Thu, Aug 19, 2010 at 4:33 AM, Waqas Badar
wrote:
> We are observing strange behavior in Cassandra. We have a ring of two
> nodes. When we insert data in Cassandra, old data vanishes after some entries.
We are observing strange behavior in Cassandra. We have a ring of two
nodes. When we insert data in Cassandra, old data vanishes after some
entries. Please note that it is not data loss, as when we
move that data to a separate node, all data is shown. We are using
Cassandra 0.6.3 an
Thanks for the input.
My primary draw to Cassandra is dynamic schema. I could make it work
relationally, perhaps even nicely with something like postgres'
hstore, but I haven't investigated that fully yet. Relatively linear
scaling has its appeal and competitive advantages too. I also find
We saw corruption pre 0.4 days. Digg hasn't seen corruption since that got
taken care of. We are only doing this for the "just in case the shit hits the
fan". Cassandra is rapidly changing and it would be completely careless of us
to forgo a path of using a new database as our primary datastore.
Recent messages to the list regarding durability and backup strategies
lead me to a few questions that other new users may also have.
What's the general experience with corruption to date?
Is it common?
Would I regret operating a single node cluster?
Digg referenced sending snapshots to hdfs