You overwrite your columns by writing a new row/supercolumn each time.
Remove the newRow/newSuperColumn calls from the "for" statement, which is for
columns:
int rowKey = 10;
int superColumnKey = 20;
usersWriter.newRow(ByteBufferUtil.bytes(rowKey));
usersWriter.newSuperColumn(ByteBufferUtil.bytes(superColumnKey));
for (int
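(A fuller sketch of that corrected pattern, assuming Cassandra's
SSTableSimpleUnsortedWriter bulk-load API; the loop bound, column names,
values, and timestamps below are illustrative:)

int rowKey = 10;
int superColumnKey = 20;
usersWriter.newRow(ByteBufferUtil.bytes(rowKey));                 // once per row
usersWriter.newSuperColumn(ByteBufferUtil.bytes(superColumnKey)); // once per supercolumn
for (int i = 0; i < 100; i++) {
    // addColumn only appends to the current row/supercolumn, so columns
    // written on earlier iterations are not overwritten
    usersWriter.addColumn(ByteBufferUtil.bytes("col" + i),
                          ByteBufferUtil.bytes(i),
                          System.currentTimeMillis() * 1000); // microseconds
}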
Thanks for the links. I wanted to avoid a major compaction somehow.
I see many JIRA issues on timestamps related to compaction/reads. So many
improvements have been proposed.
--
Ravi
On Thu, Oct 10, 2013 at 12:26 AM, Shahab Yunus wrote:
> Ahh, yes, 'compaction'. I blanked out while mentioning
I did some tests on this issue, and it turns out the problem is caused by the
local timestamp.
In our traffic, the update and the delete happen very fast, within 1 second,
even within 100 ms.
And at that time the NTP service did not seem to work well; the offset was
sometimes even larger than 1 second.
Then the
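(An editorial illustration of that failure mode under last-write-wins, with
made-up numbers; Cassandra keeps whichever cell carries the higher timestamp,
and a tombstone wins only ties or higher:)

long updateMicros = 1_381_300_000_000_000L;               // from client A's clock
long deleteMicros = updateMicros + 100_000L - 1_500_000L; // client B, 1.5 s behind,
                                                          // deleting 100 ms later
boolean deleteHides = deleteMicros >= updateMicros;       // false: update survives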
I have not been able to do the test with the 2nd cluster, but have been
given a disturbing data point. We had a disk slowly fail causing a
significant performance degradation that was only resolved when the
"sick" node was killed.
* Perf in DC w/ sick disk: http://i.imgur.com/W1I5ymL.png?1
*
> From: johnlu...@hotmail.com
> To: user@cassandra.apache.org
> Subject: RE: cassandra hadoop reducer writing to CQL3 - primary key - must it
> be text type?
> Date: Wed, 9 Oct 2013 09:40:06 -0400
>
> software versions : apache-cassandra-2.0.1 hadoop
Ahh, yes, 'compaction'. I blanked out while mentioning repair and cleanup.
That is in fact what needs to be done first and what I meant. Thanks
Robert.
Regards,
Shahab
On Wed, Oct 9, 2013 at 1:50 PM, Robert Coli wrote:
> On Wed, Oct 9, 2013 at 7:35 AM, Ravikumar Govindarajan <
> ravikumar.govi
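(For reference: a major compaction can be forced per column family with
nodetool compact <keyspace> <column_family>; the arguments are placeholders.)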
On Wed, Oct 9, 2013 at 7:35 AM, Ravikumar Govindarajan <
ravikumar.govindara...@gmail.com> wrote:
> What is the quick way to delete old-data and at the same time make sure
> read [doesn't] churn through all deleted columns?
>
Use a database that isn't log structured?
But seriously, in 2.0 there'
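(One widely used pattern for bounding such reads, sketched below under assumed
names: bucket the wide row by day, so purging a day is a single row-level
delete and reads of the wanted window never slice past column tombstones:)

// Hedged sketch: day-bucketed row keys ("<entityId>:<yyyyMMdd>"), so old data
// is dropped by deleting whole daily rows instead of individual columns.
String rowKeyFor(String entityId, long millis) {
    String day = new java.text.SimpleDateFormat("yyyyMMdd")
            .format(new java.util.Date(millis));
    return entityId + ":" + day; // one row per entity per day
}
// Reading 7 days = a multi-get of the 7 wanted row keys;
// purging a day = one row-level delete for that key.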
Read the paper "Building on Quicksand", especially the section where he
describes what they do at Amazon... the apology model... i.e. allow
overbooking and apologize, but limit overbooking.... That is one way to go and
stay scalable.
You may want to analyze the percentage change that overbooking can be as
we
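(An editorial sketch of that bounded-overbooking idea; the class, names, and
limits are illustrative, not from the paper:)

import java.util.concurrent.atomic.AtomicInteger;

class OverbookingGuard {
    private final AtomicInteger booked = new AtomicInteger();
    private final int capacity;
    private final int apologyBudget;

    OverbookingGuard(int capacity, int apologyBudget) {
        this.capacity = capacity;
        this.apologyBudget = apologyBudget;
    }

    boolean tryReserve() {
        int now = booked.incrementAndGet();
        if (now <= capacity + apologyBudget) {
            return true;              // accepted; may later require an apology
        }
        booked.decrementAndGet();     // past the apology budget: refuse
        return false;
    }
}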
Thanks.
Date: Sun, 6 Oct 2013 11:48:42 -0400
Subject: Re: Facebook Cassandra
From: edlinuxg...@gmail.com
To: user@cassandra.apache.org
As it relates to c*
http://www.cs.cornell.edu/Projects/ladis2009/papers/Lakshman-ladis2009.PDF
We have built, implemented, and operated a storage system pro
I might be missing something obvious here but can't you afford (time-wise)
to run cleanup or repair after the deletion so that the deleted data is
gone? Assuming that your columns are time-based data?
Regards,
Shahab
On Wed, Oct 9, 2013 at 10:35 AM, Ravikumar Govindarajan <
ravikumar.govindara.
Hi all,
for an online shop owned by my company I would like to remove MySQL for
everything concerning the frontend and use Cassandra instead.
The site has more than a million visits each day, but what we need to know is
Products (deals) are divided by city
Each product can stay online for X time
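(An editorial sketch of one possible model for this, assuming the DataStax
Java driver 2.0 and an existing "shop" keyspace; the table and values are
hypothetical, and the per-insert TTL expresses the deal's online window:)

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class DealsModelSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        // one partition per city; deals cluster inside it by a time-based id
        session.execute("CREATE TABLE IF NOT EXISTS shop.deals_by_city (" +
                        "city text, deal_id timeuuid, title text, " +
                        "PRIMARY KEY (city, deal_id))");
        // the TTL (in seconds) makes the deal vanish after its online window
        session.execute("INSERT INTO shop.deals_by_city (city, deal_id, title) " +
                        "VALUES ('milan', now(), 'example deal') USING TTL 86400");
        cluster.close();
    }
}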
Please read
http://mail-archives.apache.org/mod_mbox/cassandra-user/
Sent from my iPhone
> On Oct 9, 2013, at 9:42 AM, Leonid Ilyevsky wrote:
>
> Unsubscribe
>
We have wide rows accumulated in a Cassandra CF and have now changed our
app-side logic.
The application now only wants the first 7 days of data from this CF.
What is the quick way to delete the old data and at the same time make sure
reads don't churn through all the deleted columns?
Let's say I do the following
Unsubscribe
I don't know what happened to my original post but it got truncated.
Let me try again :
software versions : apache-cassandra-2.0.1 hadoop-2.1.0-beta
I have been experimenting with using hadoop for a map/reduce operation on
cassandra,
outputting to the CqlOutputFormat.class.
I based my fi
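(For context, a minimal job-setup sketch for CqlOutputFormat, following the
shape of Cassandra 2.0's hadoop_cql3_word_count example; the keyspace, table,
and CQL text are placeholders. The reducer emits Map<String, ByteBuffer> keys
holding the primary-key columns, which are not limited to text types, and
List<ByteBuffer> values bound to the '?' markers:)

// classes from org.apache.cassandra.hadoop(.cql3) and org.apache.hadoop.mapreduce
Job job = new Job(conf, "cql-output-example");
job.setOutputFormatClass(CqlOutputFormat.class);
job.setOutputKeyClass(Map.class);    // Map<String, ByteBuffer>: primary key
job.setOutputValueClass(List.class); // List<ByteBuffer>: values for the '?'s
ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "my_ks", "my_table");
CqlConfigHelper.setOutputCql(job.getConfiguration(),
        "UPDATE my_ks.my_table SET value = ?");
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "127.0.0.1");
ConfigHelper.setOutputPartitioner(job.getConfiguration(), "Murmur3Partitioner");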
Hi all. I have a question: how can we use MapReduce job chaining with a
Cassandra column family as input?
I use a MapReduce job chain and it cannot find my input column family
for job1..?
Configuration conf1 = new Configuration();
Configuration conf2 = new Configuration();
Job job1
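(A hedged sketch of one way to chain the jobs, modeled on Cassandra's
word_count example; the names and the intermediate HDFS path are illustrative,
and mapper/reducer wiring is omitted. One common pitfall: new Job(conf) copies
the Configuration, so the Cassandra input properties must be set on
job.getConfiguration(), or on the conf before the Job is created:)

// classes from org.apache.cassandra.hadoop, org.apache.cassandra.thrift,
// org.apache.cassandra.utils and org.apache.hadoop.mapreduce(.lib.*)
Job job1 = new Job(new Configuration(), "phase1");
job1.setInputFormatClass(ColumnFamilyInputFormat.class);
ConfigHelper.setInputInitialAddress(job1.getConfiguration(), "127.0.0.1");
ConfigHelper.setInputPartitioner(job1.getConfiguration(), "Murmur3Partitioner");
ConfigHelper.setInputColumnFamily(job1.getConfiguration(), "my_ks", "my_cf");
ConfigHelper.setInputSlicePredicate(job1.getConfiguration(),
        new SlicePredicate().setSlice_range(new SliceRange(
                ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER,
                false, Integer.MAX_VALUE)));
FileOutputFormat.setOutputPath(job1, new Path("/tmp/phase1"));
job1.waitForCompletion(true);

// job2 chains off job1's HDFS output instead of reading Cassandra again
Job job2 = new Job(new Configuration(), "phase2");
FileInputFormat.addInputPath(job2, new Path("/tmp/phase1"));
job2.waitForCompletion(true);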
Hi Rob,
thanks for your insight on the matter. We will put together a benchmark
and hopefully get back with some meaningful results. We have a pair of
identical servers and intend to run YCSB workloads on default datastax
Cassandra community edition installations on Ubuntu & Windows. Any
suggestio
The suggested fix to run a major compaction on the index column family
unfortunately didn't help. Though, rebuilding the index (nodetool
rebuild_index) fixed it.
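(For reference, the command takes the keyspace, the column family, and the
index name(s): nodetool rebuild_index <keyspace> <cf> <index_name>; the exact
index naming expected can vary by version.)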
This bug appears to be almost the same as
https://issues.apache.org/jira/browse/CASSANDRA-5732, (and some of the
related bugs mentioned