Anyone have any clue?
On Wed, Mar 9, 2022 at 7:01 PM MyWorld wrote:
> Hi all,
> Some problems with the display. Resending my query -
>
> I am modelling a table for a shopping site where we store products for
> customers and their data in JSON. The maximum number of products per
> customer is 10k.
>
> We initially designed this table with the architecture below:
> cust_prods(cust_id bigint PK, prod_id bigint CK, prod_data text).
> cust_id is the partition key.
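A minimal CQL sketch of the table as described above; the clustering order and
the example query are illustrative assumptions, not taken from the original post:

    -- One partition per customer, up to ~10k product rows per partition.
    CREATE TABLE cust_prods (
        cust_id   bigint,      -- partition key
        prod_id   bigint,      -- clustering key
        prod_data text,        -- product data stored as a JSON string
        PRIMARY KEY (cust_id, prod_id)
    );

    -- Fetch all products for one customer (a single-partition read).
    SELECT prod_id, prod_data FROM cust_prods WHERE cust_id = 12345;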
>
> *Thanks but there’s no DSE License.*
FWIW it was announced just before Christmas that both DSBulk (DataStax Bulk
Loader) and the DataStax Apache Kafka connector are now freely available to
all developers and will work with open-source Apache Cassandra. For details,
see
https://www.datast
Another option instead of raw sstables is to use the Spark Migrator [1].
It reads from a source cluster, can apply some transformations (such as
table/column renaming), and writes to a target cluster. It's a very
convenient tool, OSS and free of charge.
[1] https://github.com/scylladb/scylla-migrator
>
> *In terms of speed, the sstableloader should be faster, correct? Maybe the
> DSE BulkLoader finds application when you want a slice of the data and not
> the entire cake. Is it correct?*
There's no real direct comparison because DSBulk is designed for operating
on data in CSV or JSON as a representation
Hi everyone,
Is the DSE BulkLoader faster than the sstableloader?
Sometimes I need to make a cluster snapshot and replicate a Cluster A to a
Cluster B with fewer performance capabilities but the same data size.
In terms of speed, the sstableloader should be faster, correct?
Maybe the DSE BulkLoader finds application when you want a slice of the data
and not the entire cake. Is it correct?
To: user@cassandra.apache.org
Subject: Re: [EXTERNAL] Re: *URGENT* Migration across different Cassandra
cluster few having same keyspace/table names
Hi Sean,
Those are all valid points.
Please see my answers below -
1. The reason we want to move from 'A' to 'B' is to get rid
> ...even more fragile afterwards. I would push back to see what is
> negotiable.
>
> Sean Durity – Staff Systems Engineer, Cassandra
>
> *From:* Ankit Gadhiya
> *Sent:* Friday, January 17, 2020 8:50 AM
> *To:* user@cassandra.apache.org
Subject: [EXTERNAL] Re: *URGENT* Migration across different Cassandra cluster
few having same keyspace/table names
Hi Upasana,
Thanks for your response. I’d love to do that as a first strategy, but since
they are both separate clusters, how would I do that? The keyspaces already
have NetworkTopologyStrategy with RF=3.
— Ankit
On Fri, Jan 17, 2020 at 8:45 AM Upasana Sharma <028upasana...@gmail.com> wrote:
Hi,
Did you consider adding the Cassandra nodes from cluster B into cluster A as
a different data center?
Your keyspace would then be on NetworkTopologyStrategy.
In this case, all data can be synced between both data centers by Cassandra
using rebalancing.
At client/application level you
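A sketch of what that keyspace change could look like in CQL; the keyspace
name, data center names, and replication counts are placeholders for
illustration:

    -- Replicate the keyspace to both data centers (names/RFs are assumptions).
    ALTER KEYSPACE my_keyspace WITH REPLICATION = {
        'class' : 'NetworkTopologyStrategy',
        'DC_A'  : 3,
        'DC_B'  : 3
    };

After the second data center is added, the existing data still has to be
streamed to it (typically a rebuild on the new nodes, followed by repairs)
before both sides are in sync.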
Thanks but there’s no DSE License.
Wondering how sstableloader will help as some of the keyspace and table
names are the same. Also, how do I sync a few system keyspaces?
Thanks & Regards,
Ankit
On Fri, Jan 17, 2020 at 1:11 AM Vova Shelgunov wrote:
*DataStax Bulk Loader*
https://www.datastax.com/blog/2018/05/introducing-datastax-bulk-loader
On Fri, Jan 17, 2020, 09:09 Vova Shelgunov wrote:
DataStax Bulk Loader can be an option if the data is large.
On Fri, Jan 17, 2020, 07:33 Nitan Kainth wrote:
If the keyspace already exists, use the COPY command or sstableloader to
merge data. If the data volume is too big, consider Spark or a custom Java
program.
Regards,
Nitan
Cell: 510 449 9629
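For smaller volumes, the COPY route could look roughly like this in cqlsh;
keyspace, table, and file names below are placeholders:

    -- On the source cluster: export one table to CSV.
    COPY my_keyspace.my_table TO 'my_table.csv' WITH HEADER = TRUE;

    -- On the target cluster: load the CSV into the existing table.
    COPY my_keyspace.my_table FROM 'my_table.csv' WITH HEADER = TRUE;

COPY is a cqlsh convenience best suited to small-to-medium tables; for larger
volumes the sstableloader or Spark options mentioned above scale better.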
Any leads on this?
— Ankit
On Thu, Jan 16, 2020 at 8:51 PM Ankit Gadhiya wrote:
Hi Arvinder,
Thanks for your response.
Yes - Cluster B already has some data. Tables/KS names are identical; for
the data, I still haven't got clarity on whether it is identical data or not.
I am assuming not, since it's for different customers, but I need to confirm.
*Thanks & Regards,*
*Ankit Gadhiya*
So as I understand, Cluster B already has some data and is not an empty
cluster.
When you say the clusters share the same keyspace and table names, do you
mean both clusters have identical data in those ks/tables?
-Arvi
On Thu, Jan 16, 2020, 5:27 PM Ankit Gadhiya wrote:
Hello Group,
I have a requirement in one of the production systems where I need to be
able to migrate the entire dataset from Cluster A (Azure Region A) to
Cluster B (Azure Region B).
Each cluster has 3 Cassandra nodes (RF=3) running, used by different
applications. A few of the applications are common
...repaired, you'll have to run "nodetool join" to start serving reads.
On Wed, 29 Aug 2018 at 12:40, Vlad wrote:
Will it help to set read_repair_chance to 1 (compaction is
SizeTieredCompactionStrategy)?
On Wednesday, August 29, 2018 1:34 PM, Vlad wrote:
Hi,
Quite urgent questions: due to a disk and C* start problem we were forced to
delete the commit logs from one of the nodes.
Now repair is running, but meanwhile some reads bring back no data (RF=2).
Can this node be excluded from read queries, so that all reads are redirected
to the other node in the ring?
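For reference, read_repair_chance was a per-table setting in that generation
of Cassandra (it was removed in 4.0), so setting it to 1 would be a table
alteration along these lines; keyspace and table names are placeholders:

    -- Illustrative only: trigger background read repair on every read.
    ALTER TABLE my_keyspace.my_table WITH read_repair_chance = 1.0;

Note that this only repairs partitions that actually get read, so it is not a
substitute for the repair that is already running.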
Thank you all for your hints on this.
I added another data folder on the commit log disk to cover the immediate
urgency. The next step will be to reorganize and deduplicate the data into a
second table, then drop the original one, clean up the snapshot, and
consolidate all data files back away from the commitlog
Agreed that you tend to add capacity to nodes, or add nodes, once you know
you have no unneeded data in the cluster.
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Wednesday, April 04, 2018 9:10 AM
To: user@cassandra.apache.org
Subject: Re: Urgent Problem - Disk full
Hi,
When
> From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
> Sent: Wednesday, April 04, 2018 7:28 AM
> To: user@cassandra.apache.org
> Subject: RE: Urgent Problem - Disk full
>
> Jeff,
>
> Just wondering: why wouldn't the answer be to:
> 1. move anything you want to archive
There are also the old snapshots to remove, which could be a significant
amount of space.
-----Original Message-----
From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
Sent: Wednesday, April 04, 2018 7:28 AM
To: user@cassandra.apache.org
Subject: RE: Urgent Problem - Disk full
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Wednesday, April 04, 2018 7:10 AM
To: user@cassandra.apache.org
Subject: Re: Urgent Problem - Disk full
Yes, this works in TWCS.
Note though that if you have tombstone compaction subproperties set, there
may be sstables with newer filesystem timestamps
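For context, the tombstone compaction subproperties referred to are per-table
compaction options; with TWCS they might look roughly like this (table name,
window, and thresholds are placeholders, not taken from this thread):

    -- Illustrative TWCS settings with tombstone compaction enabled.
    ALTER TABLE my_keyspace.my_timeseries WITH compaction = {
        'class' : 'TimeWindowCompactionStrategy',
        'compaction_window_unit' : 'DAYS',
        'compaction_window_size' : '1',
        'unchecked_tombstone_compaction' : 'true',
        'tombstone_threshold' : '0.2'
    };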
If you expanded the cluster over time and you see an imbalance of disk usage
on the oldest hosts, "nodetool cleanup" will likely free up some of that
data.
--
Jeff Jirsa
Nothing a full repair won’t be able to fix.
On Apr 4, 2018, 7:32 AM -0400, Jürgen Albersdorfer wrote:
Hi,
I have an urgent problem - I will run out of disk space in the near future.
The largest table is a time-series table with TimeWindowCompactionStrategy
(TWCS) and default_time_to_live = 0.
Keyspace replication factor RF=3. I run C* version 3.11.2.
We have grown the cluster over time, so SSTable files
Just to clarify that behaviour: QUORUM only applies to the default
superuser; subsequent superusers you create later on are still only queried
at LOCAL_ONE. E.g.
protected static ConsistencyLevel consistencyForRole(String role)
{
    if (role.equals(DEFAULT_SUPERUSER_NAME))
        return ConsistencyLevel.QUORUM;
    else
        return ConsistencyLevel.LOCAL_ONE;
}
More explicitly - if you have 60 nodes, setting rf=60 will likely make it very
difficult for you to log in as a superuser.
--
Jeff Jirsa
On Sep 6, 2017, at 11:40 AM, Jon Haddad wrote:
I wouldn’t worry about being meticulous about keeping RF = N as the cluster
grows. If you had 60 nodes and your auth data was only on 9, you’d be
completely fine.
On Sep 6, 2017, at 11:36 AM, Cogumelos Maravilha wrote:
After inserting a new node we should run:
ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' : ...
'replication_factor' : x };
where x = number of nodes in the dc.
The default user and password should work:
-u cassandra -p cassandra
Cheers.
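Spelled out, that statement could take either of these forms depending on
the replication strategy in use; the class, data center name, and factor
below are placeholders:

    -- SimpleStrategy form (single data center).
    ALTER KEYSPACE system_auth
        WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

    -- NetworkTopologyStrategy form (per-DC replication).
    ALTER KEYSPACE system_auth
        WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 3 };

A repair of system_auth is normally needed afterwards so the new replicas
actually receive the data.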
Common trap. It's an unfortunate default that is not so easy to change.
Community!!
From: kurt greaves [mailto:k...@instaclustr.com]
Sent: 23 August 2017 11:14
To: User
Subject: Re: C* 3 node issue -Urgent
The cassandra user requires QUORUM consistency to be achieved for
authentication. Normal users only require ONE. I suspect your system_auth
keyspace has an RF of 1, and the node that owns the cassandra user's data is
down.
Steps to recover:
1. Turn off authentication on all the nodes
2. Restart the
CQLSH to change this? Or better, how do I get the other 2 nodes back up?
> From: Akhil Mehra [mailto:akhilme...@gmail.com]
> Sent: 23 August 2017 10:05
> To: user@cassandra.apache.org
> Subject: Re: C* 3 node issue -Urgent
Sent: 23 August 2017 10:05
To: user@cassandra.apache.org
Subject: Re: C* 3 node issue -Urgent
The cqlsh image says bad credentials. Just confirming that you have the
correct username/password when logging on.
By turning on authentication I am assuming you mean using the
PasswordAuthenticator instead of the
Error from
server: code=0100 [Bad credentials] message="Username and/or password are
incorrect"',)})
Yes, you are correct, I do have PasswordAuthenticator turned on.
From: Akhil Mehra [mailto:akhilme...@gmail.com]
Sent: 23 August 2017 10:05
To: user@cassandra.apache.org
Subject: Re: C* 3 node issue -Urgent
I will also mention I am on:
C* 3.0.11
Linux Oracle Red Hat 7.1
Java 1.8.0.31
Python 2.7
From: Jonathan Baynes
Sent: 23 August 2017 09:47
To: 'user@cassandra.apache.org'
Cc: Stewart Allman
Subject: C* 3 node issue -Urgent
Hi Everyone.
I need the community's help here.
I have attempted this morning to turn on JMX authentication for nodetool.
I've gone into the cassandra-env.sh file and updated the following:
LOCAL_JMX=No
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS
One of the column names in the row with key 353339332d3134363533393931
failed to validate with the validator for the column.
If you really are after which column is problematic, and are able to
build and run Cassandra, you can add debugging info to Column.java:
protected void validateName(C
I was able to figure out that 353339332d3134363533393931 is the row key,
but I have no idea what the 92668395684826132216160944211592988451 part is.
sstable2json also fails with a validation error on this row key.
Now, since I have lost the data for this row - how do I find out what the
root cause was?
Thanks
OK, I've run scrub on the 3 nodes, and for the problematic row I get:
Error validating row
DecoratedKey(92668395684826132216160944211592988451,
353339332d3134363533393931)
The full message is
WARN [CompactionExecutor:2700] 2012-06-14 14:26:42,041
CompactionManager.java (line 582) Non-fatal error reading
> Is there a way to make Cassandra throw away the offending column?
Running scrub should allow you to get rid of the row containing the
problematic column. Unfortunately it will discard the whole row, not
just the column.
However, since scrub takes a snapshot anyway (and should tell you
which sstable h
Hi again,
After some further investigation, I'm now in a situation where 3 nodes
(of a 6-node cluster) are all failing with ValidationExecutor errors during
compaction, which is triggered by
"repair -pr PRODUCTION UserCompletions" against any node in the cluster
- the repair gets stuck
On Thu, Jun 14, 2012 at 12:00 PM, Piavlo wrote:
> What's the procedure to check if the compressed sstable is corrupted or not?
Since you use compression, in theory that can't be disk bitrot, since in
that case you would have got a checksum error instead. The fact that it
happened on 3 nodes during
Hi Sylvain,
Yes, this UserCompletions CF uses a composite comparator and I do use
sstable compression.
What's the procedure to check if the compressed sstable is corrupted or not?
If it's corrupted, what can I do to fix the issue with minimal cluster
load impact? Is there a way to delete all User
On Thu, Jun 14, 2012 at 8:26 AM, Piavlo wrote:
> I started looking for similar messages on other nodes and saw a SINGLE
> IllegalArgumentException on ValidationExecutor on the same node and 2
> other nodes (this is a 6-node cluster), which happened at almost the same
> time, in all nodes while flu
Hi,
I have a pretty urgent issue with a 1.0.9 cluster.
In OpsCenter I saw a compaction that had a progress of 0% for a long
time. Looking at the Cassandra log on the relevant node, I see REPEATED
messages of IllegalArgumentException in CompactionExecutor:
INFO [CompactionExecutor:3335] 2012-06
Thanks for the help, this seems to have worked. Except that while adding the
new node we added the same token to a different IP (operational script
goofup) and brought the node up, so now the other nodes just had the message
that a new IP had taken over the token.
- So we brought it down and f
OK, I will go with the IP change strategy and keep you posted. Not going to
manually copy any data, just bring up the node and let it bootstrap.
Thanks
On Fri, Aug 19, 2011 at 11:46 AM, Peter Schuller
<peter.schul...@infidyne.com> wrote:
> (Yes, this should definitely be easier. Maybe the most generally
> useful fix would be for Cassandra to support a node joining the ring
> in "write-only" mode. This would be useful in other cases, such as
> when you're trying to temporarily off-load a node by disabling
> gossip).
I knew I had
> From what I understand, Peter's recommendation should work for you. They
> have both worked for me. No need to copy anything by hand on the new node.
> Bootstrap/repair does that for you. From the Wiki:
Right - it's just that the complication comes from the fact that he's
using the same machine,
> I am running read/write at quorum. At this point I have turned off my
> clients from talking to this node. So if that is the case I can potentially
> just nodetool repair (without changing IP). But would it be better if I
No, other nodes in the cluster will still be sending reads to the node.
>
Hi -
From what I understand, Peter's recommendation should work for you. They
have both worked for me. No need to copy anything by hand on the new node.
Bootstrap/repair does that for you. From the Wiki:
If a node goes down entirely, then you have two options:
(Recommended approach) Bring
Let me be specific on "lost data" -> we lost a replica; the other 2 nodes
have replicas.
I am running read/write at quorum. At this point I have turned off my
clients from talking to this node. So if that is the case I can potentially
just nodetool repair (without changing IP). But would it be better
> OK, so we just lost the data on that node. We are rebuilding the RAID on
> it, but once it is up what is the best way to bring it back into the cluster?
You're saying the RAID failed and the data is gone?
> just let it come up and run nodetool repair
> copy data from another node and then run nodetool repa
OK, so we just lost the data on that node. We are rebuilding the RAID on it,
but once it is up, what is the best way to bring it back into the cluster?
- just let it come up and run nodetool repair
- copy data from another node and then run nodetool repair
- do I still need to run repair imme
You should have them all within a single " " and not in multiple " ", " ".
For example:
seeds: "192.168.1.115, 192.168.1.110"
versus what you have...
On Fri, Jun 17, 2011 at 7:00 PM, Anurag Gujral wrote:
Hi All,
I specified multiple hosts in the seeds field when using cassandra-0.8,
like this:
seeds: "192.168.1.115","192.168.1.110","192.168.1.113"
But I am getting an error:
while parsing a block mapping
in "", line 106, column 13:
- seeds: "192.168.1.115","192.168. ...
Using a single directory will be the most efficient use of space; multiple
directories are useful when you accidentally run out of space.
http://www.mail-archive.com/user@cassandra.apache.org/msg07874.html
Can you put the SSDs in a stripe set?
Also this may be of interest:
http://www.bitplu
How did you solve it?
On Sun, Apr 3, 2011 at 7:32 PM, Anurag Gujral wrote:
Now it is using all three disks. I want to understand why the recommended
approach is to use one single large volume/directory and not multiple ones -
can you please explain in detail?
I am using SSDs; using three small ones is cheaper than using one large one.
Please suggest.
Thanks,
Anurag
Is this still a problem? Are you getting errors on the server?
It should be choosing the directory with the most space.
BTW, the recommended approach is to use a single large volume/directory for
the data.
Aaron
On 2 Apr 2011, at 01:56, Anurag Gujral wrote:
Hi All,
I have set up a Cassandra cluster with three data directories, but
Cassandra is using only one of them and that disk is out of space.
Why is Cassandra not using all three data directories?
Please suggest.
Thanks,
Anurag
> What happened is this:
> You started your cluster with only one node, so at first, all data was on
> this node.
> Then you added a second node. Cassandra then moved (approximately)
> half of the data to the second node. In theory, at that
> point the data that was moved to the second node could be
-----Original Message-----
From: Sylvain Lebresne [mailto:sylv...@datastax.com]
Sent: Friday, March 25, 2011 3:01 AM
To: user@cassandra.apache.org
Cc: Jared Laprise
Subject: Re: URGENT HELP PLEASE!
On Fri, Mar 25, 2011 at 1:49 AM, Jared Laprise wrote:
> Hello all, I’m running 2 Cassandra 6.5 nodes and I brought down the
> secondary node and restarted the primary node. After Cassandra came back up
> all data has been reverted to several months ago.
Out of curiosity, when you said 'brought do
> Based on what you're saying, and given I'm using session (cookie) based
> load balancing, it would be true that data is rarely read or written (per
> user) on a different server; that could be why data isn't replicating.
You've probably discovered this already but just in case, and for others f
On Thu, Mar 24, 2011 at 11:58 PM, Jared Laprise wrote:
> My replication factor is 1
>
Then you are living dangerously.
> I haven't run repair until today, I'm using ONE for consistency level.
>
Repair at rf=1 won't do anything.
I have two servers that are load balanced (per session) which bo
" moment, and didn't have the time to Google it myself :-)
-Original Message-
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Thursday, March 24, 2011 9:49 PM
To: user@cassandra.apache.org
Cc: Jared Laprise; aaron morton
Subject: Re: URGENT HELP PLEASE!
Each row is replicat
...of Cassandra and especially related to multi-node deployments. Thanks!
-----Original Message-----
From: Benjamin Coverston [mailto:ben.covers...@datastax.com]
Sent: Thursday, March 24, 2011 8:59 PM
To: user@cassandra.apache.org
Subject: Re: URGENT HELP PLEASE!
Hi Jared,
Sounds like you have two
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Thursday, March 24, 2011 8:22 PM
To: user@cassandra.apache.org
Cc: aaron morton
Subject: Re: URGENT HELP PLEASE!
Right, Cassandra doesn't keep old versions around so to see an old
version you have to have uncompacted data and whack the new data --
either by blowing away sstables or not replaying the commitlog.
Snapshots flush before creating their hard links, which rules out any
commitlog problems.
If you r
Was there anything in the server logs during startup?
I've not heard of this happening before and it's hard to think of how / why
Cassandra could revert its data, other than something external playing with
the files on disk.
Aaron
On 25 Mar 2011, at 13:49, Jared Laprise wrote:
Hello all, I'm running 2 Cassandra 6.5 nodes and I brought down the secondary
node and restarted the primary node. After Cassandra came back up, all data
has been reverted to several months ago.
I could really use some insight here; this is a production website and I need
to act quickly. I have a