Thanks Mat!
I thought you were going to expose the internals of CQL3 features like
(wide rows with) complex keys and collections to CQL2 clients (which is
something that should generally be possible, if DataStax's blog posts are
accurate, i.e. an actual description of how things were implemented an
Hi!
I had the same problem (over-counting due to replay of the commit log,
which ignored drain) after upgrading my cluster from 1.0.9 to 1.0.11.
I updated the Cassandra tickets mentioned in this thread.
Regards,
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-m
On Tue, Nov 20, 2012 at 2:03 AM, Alain RODRIGUEZ wrote:
> Thanks for the workaround, setting disk_access_mode: standard worked.
Do you have working JNA, for reference?
=Rob
--
=Robert Coli
AIM&GTALK - rc...@palominodb.com
YAHOO - rcoli.palominob
SKYPE - rcoli_palominodb
>> upgradesstables re-writes every sstable to have the same contents in the
>> newest format.
Agree.
In the world of compaction, and excluding upgrades, having older sstables is
expected.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thel
> Thanks for the workaround, setting disk_access_mode: standard worked.
hmmm, it's only a workaround.
If you can reproduce the fault, could you report it on
https://issues.apache.org/jira/browse/CASSANDRA
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronm
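For anyone following along, the workaround being discussed is a cassandra.yaml
setting (this is my understanding of the values; please double-check against
your version before relying on it):

  # cassandra.yaml -- 'auto' is the usual default, which mmaps data files on 64-bit JVMs
  disk_access_mode: standard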
> INFO [OptionalTasks:1] 2012-11-19 13:08:58,868 ColumnFamilyStore.java (line
> 451) completed loading (5175655 ms; 13259976 keys) row cache
So it was reading 2,562 rows per second during startup. I'd say that's not
unreasonable performance for 13 million rows. It will get faster in 1.2, but
for
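For reference, a quick check of that figure using the numbers from the log
line quoted above (just a back-of-the-envelope sketch):

  # rows per second while loading the row cache, per the log line above
  keys = 13259976
  elapsed_ms = 5175655
  print(round(keys / (elapsed_ms / 1000.0)))  # -> 2562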
Thanks a lot Aaron and Edward.
The mail thread clarifies some things for me.
To let others on this thread know: running upgradesstables did decrease
our bloom filter false positive ratios a lot. (upgradesstables was run not
to upgrade from a Cassandra version to a higher Cassandra
versio
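In case it helps anyone else, upgradesstables is run per node via nodetool;
the keyspace and column family names below are placeholders, and both are
optional (omitting them rewrites everything):

  nodetool upgradesstables <keyspace> [<column_family> ...]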
On Tue, Nov 20, 2012 at 5:23 PM, aaron morton wrote:
> My understanding of the compaction process was that since data files keep
> continuously merging we should not have data files with very old last
> modified timestamps
>
> It is perfectly OK to have very old SSTables.
>
> But performing an upg
> My understanding of the compaction process was that since data files keep
> continuously merging we should not have data files with very old last
> modified timestamps
It is perfectly OK to have very old SSTables.
> But performing an upgradesstables did decrease the number of data files and
On Tue, Nov 20, 2012 at 2:49 PM, Rob Coli wrote:
> On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner wrote:
> > We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs
> replayed
> > regardless of the drain.
>
> Your experience and desire for different (expected) behavior is welcomed
> on
On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner wrote:
> We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs replayed
> regardless of the drain.
Your experience and desire for different (expected) behavior is welcomed on:
https://issues.apache.org/jira/browse/CASSANDRA-4446
"nodeto
Great!
2012/11/20 michael.figui...@gmail.com
> The Apache Cassandra project has traditionally not focused on the client side.
> Rather than modifying the scope of the project and jeopardizing the current
> driver ecosystem, we've preferred to open source it this way. Note that this
> driver's license
Alain,
My understanding is that drain ensures that all memtables are flushed, so
that there is no data in the commitlog that isn't in an sstable. A
marker is saved that indicates the commit logs should not be replayed.
Commitlogs are only removed from disk periodically
(after commitlog_total_sp
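For what it's worth, the pre-upgrade sequence I use on each node is roughly
the following (a sketch; adapt the stop/start steps to however you run
Cassandra, and given the reports in this thread it is worth watching the
startup log for unexpected commit log replay afterwards):

  nodetool drain   # flush memtables and write the marker mentioned above
  <stop the Cassandra process>
  <upgrade the binaries / package>
  <start Cassandra and check the log for commit log replay>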
Hi Timmy,
I haven't done a lot of playing with CQL3 yet, mostly just reading the
blog posts, so the following is subject to change : )
Right now, the Cequel model layer has a skinny row model (which is
designed to follow common patterns of Ruby ORMs) and a wide row model
(which is designed to beh
@Mat Brown:
> (while still retaining compatibility with CQL2 structures).
Do you mean going beyond what Cassandra itself provides in terms of CQL2/3
interoperability?
I'm looking into something similar at the moment (though in Java, not
Ruby) and would be interested in your experiences, if you fo
@Mat
Well, I guess you could add your Ruby client to this list, since there are
not a lot of them yet.
http://wiki.apache.org/cassandra/ClientOptions
Alain
2012/11/20 Mat Brown
> As the author of Cequel, I can assure you it is excellent ; )
>
> We use it in production at Brewster and it is quit
As the author of Cequel, I can assure you it is excellent ; )
We use it in production at Brewster and it is quite stable. If you try
it out and find any bugs, we'll fix 'em quickly.
I'm planning a big overhaul of the model layer over the holidays to
expose all the
new data modeling goodness in C
@Mike
I am glad to see I am not the only one with this issue (though I am sorry
it happened to you, of course).
Isn't drain supposed to clear the commit logs? Did removing them work
properly?
In his warning to C* users, Jonathan Ellis said that a drain would avoid
this issue. It seems like i
Hi Aaron.
Here is my java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01, mixed mode)
Thanks for the workaround, setting disk_access_mode: standard worked.
Alain
2012/11/19 aaron morton
> Are you runn