[VOTE] Release Apache Cassandra 1.0.9

2012-04-02 Thread Sylvain Lebresne
1.0.8 was released more than a month ago, we have made quite a few bug fixes since and don't have any major outstanding issues open. I thus propose the following artifacts for release as 1.0.9. sha1: 4457839b9da623d9d4a090fa444614c35d39bb4c Git: http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=sh

digest query: why relying on value?

2012-04-02 Thread Nicolas Romanetti
Hello, Why does the digest read response include a hash of the column value? Isn't the timestamp sufficient? Maybe an answer: Is the value hash computed to cope with a (I presume rare) race-condition scenario where 2 nodes would end up with the same col. name and same col. timestamp but with a diffe

Re: digest query: why relying on value?

2012-04-02 Thread Jonathan Ellis
Look at Column.reconcile. On Mon, Apr 2, 2012 at 9:17 AM, Nicolas Romanetti wrote: >  Hello, > > Why does the digest read response include a hash of the column value? Isn't > the timestamp sufficient? > > May be an answer: > Is the value hash computed to cope with (I presume rare) race condition

Re: [VOTE] Release Apache Cassandra 1.0.9

2012-04-02 Thread Jonathan Ellis
+1 On Mon, Apr 2, 2012 at 8:33 AM, Sylvain Lebresne wrote: > 1.0.8 has been release more than a month ago, we made quite a few bug fixes > and don't have any major outstanding issue open. I thus propose the following > artifacts for release as 1.0.9. > > sha1: 4457839b9da623d9d4a090fa444614c35d39

Re: [VOTE] Release Apache Cassandra 1.0.9

2012-04-02 Thread Pavel Yaskevich
+1 -- Pavel Yaskevich On Monday 2 April 2012 at 17:25, Jonathan Ellis wrote: > +1 > > On Mon, Apr 2, 2012 at 8:33 AM, Sylvain Lebresne (mailto:sylv...@datastax.com)> wrote: > > 1.0.8 has been release more than a month ago, we made quite a few bug fixes > > and don't have any major outstandi

Re: digest query: why relying on value?

2012-04-02 Thread Sylvain Lebresne
A digest query is about making 1 digest for many columns, not 1 digest per column. If it were 1 digest per column, then yes, the timestamp would be an option. -- Sylvain On Mon, Apr 2, 2012 at 4:25 PM, Jonathan Ellis wrote: > Look at Column.reconcile. > > On Mon, Apr 2, 2012 at 9:17 AM, Nicolas
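[Editor's note: a minimal standalone sketch of the point above — a digest read returns one hash covering all requested columns, so two replicas whose columns share timestamps but differ in values must still produce different digests. The `Column` record and `digest` method here are simplified stand-ins, not Cassandra's actual classes.]

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Hypothetical simplified column; Cassandra's real Column is richer.
record Column(String name, String value, long timestamp) {}

public class DigestSketch {
    // One digest covering every column in the read result.  Two replicas
    // agree only if names, values, AND timestamps all match.
    static byte[] digest(List<Column> columns) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (Column c : columns) {
            md.update(c.name().getBytes(StandardCharsets.UTF_8));
            md.update(c.value().getBytes(StandardCharsets.UTF_8));
            md.update(ByteBuffer.allocate(8).putLong(c.timestamp()).array());
        }
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        List<Column> a = List.of(new Column("col", "v1", 42L));
        // Same name and timestamp, different value: the digests must differ,
        // which is why the value is hashed and the timestamp alone is not enough.
        List<Column> b = List.of(new Column("col", "v2", 42L));
        System.out.println(Arrays.equals(digest(a), digest(b)));
    }
}
```

Hashing only timestamps would make the two replicas above look identical to the coordinator, silently masking the divergent values.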

Re: digest query: why relying on value?

2012-04-02 Thread Nicolas Romanetti
Spot on, thanks! It would be interesting to have some metrics on how rare this case is: // break ties by comparing values. if (timestamp() == column.timestamp()) return value().compareTo(column.value()) < 0 ? column : this; If extremely rare, it would be may be mo
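[Editor's note: the tie-break quoted above can be illustrated in isolation. This is a simplified stand-in for Column.reconcile, not the real class: highest timestamp wins, and on a timestamp tie the lexicographically greater value wins, so every replica converges on the same column deterministically.]

```java
public class ReconcileSketch {
    // Hypothetical simplified column for illustration only.
    record Column(String value, long timestamp) {}

    static Column reconcile(Column a, Column b) {
        if (a.timestamp() != b.timestamp())
            return a.timestamp() > b.timestamp() ? a : b;
        // break ties by comparing values (mirrors the quoted snippet)
        return a.value().compareTo(b.value()) < 0 ? b : a;
    }

    public static void main(String[] args) {
        Column x = new Column("apple", 100L);
        Column y = new Column("banana", 100L); // same timestamp as x
        System.out.println(reconcile(x, y).value());   // tie broken by value
        System.out.println(reconcile(new Column("z", 200L), y).value()); // newer wins
    }
}
```

The value comparison is arbitrary but consistent, which is all that matters: without it, two replicas holding different values under the same timestamp could each keep their own copy forever.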

Re: ranges

2012-04-02 Thread Jonathan Ellis
Just List for the most part. If there are exactly two, maybe Pair. On Mon, Apr 2, 2012 at 6:30 PM, Mark Dewey wrote: > Is there an object that is standard for specifying a compound range? (eg > [W, X] + [Y, Z]) > > Mark -- Jonathan Ellis Project Chair, Apache Cassandra co-founder of DataStax
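[Editor's note: a minimal sketch of the suggestion above — representing a compound range such as [W, X] + [Y, Z] as a List of bound pairs. The `Pair` record here is a stand-in for illustration, not Cassandra's utility class.]

```java
import java.util.List;

public class CompoundRange {
    // Hypothetical stand-in Pair; the project may have its own utility type.
    record Pair<L, R>(L left, R right) {}

    public static void main(String[] args) {
        // A compound range is just an ordered list of simple ranges.
        List<Pair<String, String>> ranges =
            List.of(new Pair<>("W", "X"), new Pair<>("Y", "Z"));
        for (Pair<String, String> r : ranges)
            System.out.println("[" + r.left() + ", " + r.right() + "]");
    }
}
```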

kudos...

2012-04-02 Thread Brian O'Neill
I just wanted to let you guys know that I gave you a shout out... http://brianoneill.blogspot.com/2012/04/cassandra-vs-couchdb-mongodb-riak-hbase.html thanks for all the support, brian -- Brian ONeill Lead Architect, Health Market Science (http://healthmarketscience.com) mobile:215.588.6024 blog

implementation choice with regard to multiple range slice query filters

2012-04-02 Thread David Alves
Hi guys, I'm a PhD student and I'm trying to dip my feet in the water wrt Cassandra development, as I'm a long-time fan. I'm implementing CASSANDRA-3885, which pertains to supporting returning multiple slices of a row. After looking around at the portion of the

Re: kudos...

2012-04-02 Thread Jonathan Ellis
Good post. Thanks, Brian! On Mon, Apr 2, 2012 at 11:04 PM, Brian O'Neill wrote: > I just wanted to let you guys know that I gave you a shout out... > http://brianoneill.blogspot.com/2012/04/cassandra-vs-couchdb-mongodb-riak-hbase.html > > thanks for all the support, > brian > > -- > Brian ONeill

Re: implementation choice with regard to multiple range slice query filters

2012-04-02 Thread Jonathan Ellis
That would work, but I think the best approach would actually be to push multiple ranges down into ISR itself, otherwise you could waste a lot of time reading the row header redundantly (the skipBloomFilter/deserializeIndex part). The tricky part would be getting IndexedBlockFetcher to not do extra work