Re: thrift version

2010-11-09 Thread Todd Blose
The entire svn revision history was preserved with the change, so doing svn co -r917130 http://svn.apache.org/repos/asf/thrift/trunk/ would suffice. Todd On Tue, Nov 9, 2010 at 1:20 PM, Gary Dusbabek wrote: > No, that's the one. Thrift has graduated since then, which means > their svn url ch
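
A minimal sketch of the checkout and code-generation steps discussed in this thread, assuming the usual autotools build dependencies for the Thrift compiler; the revision number and URL come from the thread, the interface/cassandra.thrift path assumes a standard Cassandra source checkout, and the exact compiler flags may differ between Thrift releases:

# check out Thrift trunk at the revision mentioned above (post-graduation URL)
svn co -r917130 http://svn.apache.org/repos/asf/thrift/trunk/ thrift-r917130
cd thrift-r917130

# build the Thrift compiler itself
./bootstrap.sh && ./configure && make

# generate the Java bindings from Cassandra's interface definition
compiler/cpp/thrift --gen java /path/to/cassandra/interface/cassandra.thrift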

Re: thrift version

2010-11-09 Thread Gary Dusbabek
No, that's the one. Thrift has graduated since then, which means their svn url changed from http://svn.apache.org/repos/asf/incubator/thrift/ to http://svn.apache.org/repos/asf/thrift/. I suspect that is making things hard. Perhaps an svn guru on the list could explain how to do the checkout. G

Re: thrift version

2010-11-09 Thread Liangzhao Zeng
Actually I cannot find 917130 in http://svn.apache.org/repos/asf/thrift/, is there another svn rev? Cheers, Liangzhao On Tue, Nov 9, 2010 at 2:59 PM, Gary Dusbabek wrote: > svn rev 917130 > > On Tue, Nov 9, 2010 at 13:48, Liangzh

Re: thrift version

2010-11-09 Thread Gary Dusbabek
svn rev 917130 On Tue, Nov 9, 2010 at 13:48, Liangzhao Zeng wrote: > Can anyone tell me what Thrift version is used in Cassandra 0.66? I am using > Mac OS and trying to generate the Java code from cassandra.thrift by > myself. > > > Cheers, > > Liangzhao >

thrift version

2010-11-09 Thread Liangzhao Zeng
Can anyone tell me what Thrift version is used in Cassandra 0.66? I am using Mac OS and trying to generate the Java code from cassandra.thrift myself. Cheers, Liangzhao

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread Stu Hood
Interesting, thanks for the info. Perhaps the limitation is that index queries involving multiple clauses are currently implemented using brute-force filtering rather than an index join? The bitmap indexes have native support for this type of join, but it's not being used yet. To confirm: have

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread dragos cernahoschi
I'm running the query on three columns with cardinalities 22, 17, and 10. Interestingly, when combining columns by cardinality:

22 + 17 => no exception
22 + 10 => no exception
10 + 17 => timed out exception
22 + 17 + 10 => timed out exception

On Tue, Nov 9, 2010 at 6:29 PM, Stu Hood wrote: > C

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread Stu Hood
Can you tell me a little bit about your key distribution? How many unique values are indexed (the cardinality)? Until the OrBiC projection I mention on 1472 is implemented, the bitmap secondary indexes will perform terribly for high cardinality datasets. Thanks! -Original Message- Fro

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread Stu Hood
Interesting... thanks for the report! I'll see if I can reproduce. -Original Message- From: "dragos cernahoschi" Sent: Tuesday, November 9, 2010 10:14am To: dev@cassandra.apache.org Subject: Re: CASSANDRA-1472 (bitmap indexes) Meantime the number of SSTable(s) reduced to just 7. Initiall

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread dragos cernahoschi
Meanwhile the number of SSTables has dropped to just 7. Initially the compaction thread suffered the same "too many open files" problem and couldn't do any compaction. But I'm still not able to run my tests: TimedOutException :( On Tue, Nov 9, 2010 at 5:51 PM, Stu Hood wrote: > Hmm, 500 sstable

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread Stu Hood
Hmm, 500 sstables is definitely a degenerate case: did you disable compaction? By default, Cassandra strives to keep the sstable count below ~32, since accesses to separate sstables require seeks. In this case, the query will seek 500 times to check the secondary index for each sstable: if it f
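
As an illustration of getting the sstable count back down once compaction is re-enabled, a rough sketch assuming the 0.6/0.7-era nodetool syntax (host, data directory, and keyspace name are placeholders):

# trigger a major compaction on the node so existing sstables get merged
bin/nodetool -h localhost compact

# afterwards, far fewer *-Data.db files should remain in the keyspace's data directory
ls /var/lib/cassandra/data/MyKeyspace | grep -c -- '-Data.db'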

Build failed in Hudson: Cassandra #591

2010-11-09 Thread Apache Hudson Server
See Changes: [jbellis] merge from 0.7 -- [...truncated 1676 lines...] [junit] Testsuite: org.apache.cassandra.io.sstable.LegacySSTableTest [junit] Tests run: 1, Failures: 0, Errors: 0, Ti

Re: CASSANDRA-1472 (bitmap indexes)

2010-11-09 Thread dragos cernahoschi
There are about 500 SSTables (12GB of data including index data, statistics...). The source data file had about 3GB/26 million rows. I'm only testing with EQ expressions for now. Increasing the file limit resolved the problem, but now I'm getting TimedOutException(s) from thrift when "querying" even wi
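
For the "too many open files" part of this, a rough sketch of checking and raising the per-process file descriptor limit on Linux before starting Cassandra; 65536 is only an illustrative value, and the process match on CassandraDaemon assumes the standard startup class:

# current open-file limit for processes started from this shell
ulimit -n

# raise it for the shell that launches Cassandra (illustrative value)
ulimit -n 65536

# count the descriptors the running Cassandra process actually holds
lsof -p $(pgrep -f CassandraDaemon) | wc -l

A shell-level ulimit only affects processes started from that shell; persistent limits normally go in /etc/security/limits.conf.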