Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Brandon Williams
+1


On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne 
wrote:

> We have no outstanding tickets open and tests are in the green, so I
> propose
> the following artifacts for release as 2.1.0.
>
> sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> Artifacts:
>
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> Staging repository:
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
>
> The artifacts as well as the debian package are also available here:
> http://people.apache.org/~slebresne
>
> The vote will be open for 72 hours (longer if needed).
>
> [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> [2]: http://goo.gl/uAoTTC (NEWS.txt)
>


Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Aleksey Yeschenko
+1 

-- 
AY


On Sunday, September 7, 2014 at 10:24 AM, Brandon Williams wrote:

> +1
> 
> 
> On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne <sylv...@datastax.com>
> wrote:
> 
> > We have no outstanding tickets open and tests are in the green, so I
> > propose
> > the following artifacts for release as 2.1.0.
> > 
> > sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> > Git:
> > 
> > http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> > Artifacts:
> > 
> > https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> > Staging repository:
> > https://repository.apache.org/content/repositories/orgapachecassandra-1031/
> > 
> > The artifacts as well as the debian package are also available here:
> > http://people.apache.org/~slebresne
> > 
> > The vote will be open for 72 hours (longer if needed).
> > 
> > [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> > [2]: http://goo.gl/uAoTTC (NEWS.txt)
> > 
> 
> 
> 




Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Jonathan Ellis
+1


On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne 
wrote:

> We have no outstanding tickets open and tests are in the green, so I
> propose
> the following artifacts for release as 2.1.0.
>
> sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> Artifacts:
>
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> Staging repository:
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
>
> The artifacts as well as the debian package are also available here:
> http://people.apache.org/~slebresne
>
> The vote will be open for 72 hours (longer if needed).
>
> [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> [2]: http://goo.gl/uAoTTC (NEWS.txt)
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


[VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Sylvain Lebresne
We have no outstanding tickets open and tests are in the green, so I propose
the following artifacts for release as 2.1.0.

sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
Git:
http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
Artifacts:
https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
Staging repository:
https://repository.apache.org/content/repositories/orgapachecassandra-1031/

The artifacts as well as the debian package are also available here:
http://people.apache.org/~slebresne

The vote will be open for 72 hours (longer if needed).

[1]: http://goo.gl/zfCTyc (CHANGES.txt)
[2]: http://goo.gl/uAoTTC (NEWS.txt)


Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Benedict Elliott Smith
I've just committed 7519, which would be nice (but not essential) to include
in 2.1.0, since it has breaking changes to the stress API.

Also, I'm not sure if this is just me missing something obvious, and it is
probably minor to fix, but ant fails to compile on
org.apache.cassandra.hadoop.cql3.LimitedLocalNodeFirstLocalBalancingPolicy.


On Sun, Sep 7, 2014 at 9:27 PM, Jonathan Ellis  wrote:

> +1
>
>
> On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne 
> wrote:
>
> > We have no outstanding tickets open and tests are in the green, so I
> > propose
> > the following artifacts for release as 2.1.0.
> >
> > sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> > Git:
> >
> >
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> > Artifacts:
> >
> >
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> > Staging repository:
> >
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
> >
> > The artifacts as well as the debian package are also available here:
> > http://people.apache.org/~slebresne
> >
> > The vote will be open for 72 hours (longer if needed).
> >
> > [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> > [2]: http://goo.gl/uAoTTC (NEWS.txt)
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>


Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Jake Luciani
+1


On Sun, Sep 7, 2014 at 10:23 AM, Sylvain Lebresne 
wrote:

> We have no outstanding tickets open and tests are in the green, so I
> propose
> the following artifacts for release as 2.1.0.
>
> sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> Artifacts:
>
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> Staging repository:
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
>
> The artifacts as well as the debian package are also available here:
> http://people.apache.org/~slebresne
>
> The vote will be open for 72 hours (longer if needed).
>
> [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> [2]: http://goo.gl/uAoTTC (NEWS.txt)
>



-- 
http://twitter.com/tjake


[contrib] Idea/ Reduced I/O and CPU cost for GET ops

2014-09-07 Thread Mark Papadakis
Greetings,

This heuristic helps us eliminate unnecessary I/O in certain workloads and
datasets, often by many orders of magnitude. This is a description of the
problems we faced and how we dealt with them — I am pretty certain this can be
easily implemented on C*, albeit it will likely require a new SSTable format
that can support the semantics described below.

# Example
One of our services, a price comparison service, has many millions of products
in our datastore, and we access 100+ rows on a single page request (almost all
of them in 2-3 MultiGets; those are executed in parallel in our datastore
implementation). This is fine, and it rarely takes more than 100ms to get back
responses from all those requests.

Because we, in practice, need to update all key=>value rows multiple times a
day (merchants tend to update their price every few hours for some odd reason),
a key’s columns exist in multiple (almost always all) SSTables of a CF, and so
we almost always have to merge the final value for each key from all those many
SSTables, as opposed to only needing to access a single SSTable to do that.

In fact, for most CFs of this service, we need to merge most of their SSTables
to get the final CF, for that same reason (rows update very frequently, as
opposed to, say, a ‘users’ CF where you typically only set the row once on
creation and very infrequently after that).
Surely, there must be ways to exploit this access and update pattern (there
are).

Our SSTables are partitioned into chunks. One of those chunks is the index
chunk, which holds distinctKeyId:u64 => offset:u32, sorted by distinctKeyId.
We have a map (a skip list) which allows us to map distinctKeyId:u64 => data
chunk region, so that this offset is an offset into the respective data chunk
region — this is so that we won’t have to store 64-bit offsets there, which
saves us 4 bytes/row (every ~4GB we track another data chunk region, so in
practice this is a constant-time operation).
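(For illustration only: a minimal Java sketch of the region-relative offset
idea above, assuming fixed ~4GB regions; the names and layout are assumptions,
not our actual implementation.)

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: store a 32-bit offset relative to a ~4GB data chunk
// region instead of a full 64-bit file offset, saving 4 bytes per index entry.
final class RegionRelativeOffsets
{
    static final long REGION_SIZE = 1L << 32; // ~4GB per data chunk region

    // Base file offset of each region, appended as the data file grows.
    private final List<Long> regionBases = new ArrayList<>();

    // Encode an absolute file offset as a 32-bit offset relative to its region.
    int encode(long absoluteOffset)
    {
        int region = (int) (absoluteOffset / REGION_SIZE);
        while (regionBases.size() <= region)
            regionBases.add((long) regionBases.size() * REGION_SIZE);
        return (int) (absoluteOffset - regionBases.get(region)); // fits in a u32
    }

    // Decode back to an absolute offset, given the region the key maps to
    // (in our case the keyId => region mapping lives in a separate skip list).
    long decode(int region, int relativeOffset)
    {
        return regionBases.get(region) + (relativeOffset & 0xFFFFFFFFL);
    }
}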

# How we are mitigating IO access and merge costs
Anyway, when enabled, at an additional cost of 64 bits for each entry in the
index chunk, instead of keyId:u64 => (offset:u32) we now use
keyId:u64 => (offset:u32, latestTs:u32, signature:u32).

For each CF we are about to serialise to an SSTable, we identify the latest
timestamp across all columns (in milliseconds; we need the unix timestamp).
Depending on the configuration, we also do one of the following:
1. Compute a digest of all column names. Currently we use CRC32. When we build
the SSTable index chunk, we store distinctKeyId:u64 =>
(dataChunkSegmentOffset:u32, latestTimestamp:u32, digest:u32).
2. Compute a mask based on the first 31 distinct column names encountered so
far. Here is some pseudocode:

Dictionary sstableDistinctColumnNames;
uint32_t mask = 0;

for (const auto &it : cf->columns)
{
    const auto name = it.name;

    if (sstableDistinctColumnNames.IsSet(name))
        mask |= (1 << sstableDistinctColumnNames[name]);
    else if (sstableDistinctColumnNames.Size() == 31)
    {
        // Cannot track this column, so we won’t be able to do much about this row
        mask = 0;
    }
    else
    {
        mask |= (1 << sstableDistinctColumnNames.Size());
        sstableDistinctColumnNames.Set(name, sstableDistinctColumnNames.Size());
    }
}

and so, we again store distinctKeyId:u64 => (dataChunkSegmentOffset:u32,
latestTimestamp:u32, map:u32).
We also store sstableDistinctColumnNames in the SSTable header (each SSTable
has a header chunk where we store KV records).

Each method comes with pros and cons. Though they probably make sense and you
get where this is going already, I will list them later.
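
(A hedged Java sketch of how method #1's signature could be computed: CRC32
over the sorted set of column names. The class and helper names are
illustrative assumptions, not our actual code.)

import java.nio.charset.StandardCharsets;
import java.util.Collection;
import java.util.TreeSet;
import java.util.zip.CRC32;

final class ColumnNameDigest
{
    // Method #1: a 32-bit digest over the set of column names present in a row,
    // so two SSTable rows covering exactly the same columns produce the same
    // signature regardless of the order the columns were written in.
    static int digest(Collection<String> columnNames)
    {
        CRC32 crc = new CRC32();
        for (String name : new TreeSet<>(columnNames)) // sorted for a stable digest
            crc.update(name.getBytes(StandardCharsets.UTF_8));
        return (int) crc.getValue(); // stored as the u32 "signature" in the index entry
    }
}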

# GET response
So for every CF SSTable, we do something like this (C++ pseudocode):
struct candidate
{
    SSTable *t;
    uint64_t offset;
    time32_t ts;
    uint32_t v;
};

Vector<candidate> candidates;

for (const auto &table : cf->sstables)
{
    time32_t latestTs;
    uint32_t v;
    const auto actualOffset = table->Offset(key, latestTs, v);

    if (!actualOffset)
        continue;

    candidates.Append({table, actualOffset, latestTs, v});
}

if (candidates.IsEmpty())
{
    // Nothing here
    return;
}

candidates.Sort([](const candidate &c1, const candidate &c2)
{
    return TrivialCmp(c1.offset, c2.offset);
});


Depending on what we decide to store (mask or digest of column names):
1.
Set<uint32_t> seen;
for (const auto &it : candidates)
{
    if (seen.IsSet(it.v))
    {
        // We have seen an update for those exact columns already
        continue;
    }

    seen.Add(it.v);
    // Unserialize CF, etc, merge
}

That’s all there is to it — the core idea is that we can safely disregard an
SSTable row if those exact columns of the CF have already been found in a more
recent SSTable considered earlier.

2.
uint32_t seen = 0;

// There is some logic not outlined here, where we need to map from one
// SSTable’s column names to another based on the stored index (again,
// pseudocode).

for (const auto &it : candidates)
{
    if (it.v && ((seen & it.v) == it.v))
    {
        // Every column in this row has been seen already
        continue;
    }

    seen |= it.v;
    // Unserialize CF, etc, merge
}


# Pros and cons
1.
PROS/CONS: Easier to compute, no restriction to the first 31 distinct columns

Re: Pointers on writing your own Compaction Strategy

2014-09-07 Thread Tupshin Harper
In addition to what Markus said, take a look at the latest patch in
https://issues.apache.org/jira/browse/CASSANDRA-6602 for a relevant
example.

-Tupshin
On Sep 4, 2014 2:28 PM, "Marcus Eriksson"  wrote:

> 1. create a class that extends AbstractCompactionStrategy (i would keep it
> in-tree while developing to avoid having classpath issues etc)
> 2. Implement the abstract methods
>- getNextBackgroundTask - called when cassandra wants to do a new minor
> (background) compaction - return a CompactionTask with the sstables you
> want compacted
>- getMaximalTask - called when a user triggers a major compaction
>- getUserDefinedTask - when a user triggers a user defined compaction
> from JMX
>- getEstimatedRemainingTasks - return the guessed number of tasks before
> we are "done"
>- getMaxSSTableBytes - if your compaction strategy puts a limit on the
> size of sstables
> 3. Execute this in cqlsh to enable your compaction strategy: ALTER TABLE
> foo WITH compaction = { 'class' : 'Bar' }
> 4. Things to think about:
> - make sure you mark sstables as compacting before returning them from
> the compaction strategy (and check the return value!):
>
> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java#L271
> - if you do this on 2.1 - don't mix repaired and unrepaired sstables
> (SSTableReader#isRepaired)
>
> Let me know if you need any more information
>
> /Marcus
>
>
>
> On Thu, Sep 4, 2014 at 6:50 PM, Ghosh, Mainak 
> wrote:
>
> > Hello,
> >
> > I am planning to write a new compaction strategy and I was hoping if
> > anyone can point me to the relevant functions and how they are related in
> > the call hierarchy.
> >
> > Thanks for the help.
> >
> > Regards,
> > Mainak.
> >
>
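
(Purely as a reading aid: a self-contained Java sketch of the shape Marcus
describes in the steps above. The method names come from his list, but the
types and signatures below are stand-ins, not the real AbstractCompactionStrategy
API; check the class in the 2.1 tree for the actual contract, including
markCompacting.)

import java.util.Collection;

// Hypothetical stand-in types: a real strategy extends
// org.apache.cassandra.db.compaction.AbstractCompactionStrategy and returns
// compaction tasks built only from sstables it successfully marked compacting.
interface SSTable { long bytesOnDisk(); boolean isRepaired(); }
interface CompactionTask { }

abstract class CompactionStrategySketch
{
    // Minor (background) compaction: pick the sstables to compact next.
    abstract CompactionTask getNextBackgroundTask(int gcBefore);

    // Major compaction triggered by the user (e.g. nodetool compact).
    abstract Collection<CompactionTask> getMaximalTask(int gcBefore);

    // User-defined compaction over specific sstables, triggered via JMX.
    abstract CompactionTask getUserDefinedTask(Collection<SSTable> sstables, int gcBefore);

    // A guess at how many compactions remain before the strategy is "done".
    abstract int getEstimatedRemainingTasks();

    // Upper bound the strategy puts on sstable size (Long.MAX_VALUE if none).
    abstract long getMaxSSTableBytes();
}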


Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Josh McKenzie
+1

On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne 
wrote:

> We have no outstanding tickets open and tests are in the green, so I
> propose
> the following artifacts for release as 2.1.0.
>
> sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> Git:
>
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> Artifacts:
>
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> Staging repository:
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
>
> The artifacts as well as the debian package are also available here:
> http://people.apache.org/~slebresne
>
> The vote will be open for 72 hours (longer if needed).
>
> [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> [2]: http://goo.gl/uAoTTC (NEWS.txt)
>



-- 
Joshua McKenzie
DataStax -- The Apache Cassandra Company


Re: [contrib] Idea/ Reduced I/O and CPU cost for GET ops

2014-09-07 Thread Benedict Elliott Smith
Hi Mark,

This specific heuristic is unlikely to be applied, as (if I've understood
it correctly) it has a very narrow window of utility: only those updates
that hit *exactly* the same clustering columns (cql rows) *and* data
columns, and it is not trivial to maintain (either cpu- or memory-wise).
However, two variants on this heuristic are already being considered for
inclusion as part of the new sstable format we're introducing in
CASSANDRA-7447 (https://issues.apache.org/jira/browse/CASSANDRA-7447),
which is an extension of the heuristic that is already applied at the
whole sstable level.

(1) Per partition, we will store the maximum timestamp (whether or not this
sits in the hash index / key cache, or in the clustering column index part,
is an open question)
 - this permits us to stop looking at files once we have a complete set of
the data we expect to return, i.e. all selected fields for the complete set
of selected rows

(2) Per clustering row, we may store enough information to construct the
max timestamp for the row
 - this permits us to stop looking at data pages if we know we have all
selected fields for a given row only
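
(A hedged Java sketch of the early-termination idea in (1), using made-up
container types rather than the actual 7447 data structures: visit sstables in
descending order of per-partition max timestamp, and stop once every selected
column is resolved and no remaining sstable can supersede what we hold.)

import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

final class MaxTimestampCutoff
{
    // Hypothetical cell and per-sstable view; the real structures differ.
    static final class Cell
    {
        final String value;
        final long timestamp;
        Cell(String value, long timestamp) { this.value = value; this.timestamp = timestamp; }
    }

    interface SSTableView
    {
        long maxTimestampFor(String partitionKey);       // per-partition max timestamp from the index
        Map<String, Cell> cellsFor(String partitionKey); // column name -> cell
    }

    static Map<String, Cell> read(String key, Set<String> selected, List<SSTableView> sstables)
    {
        Map<String, Cell> result = new HashMap<>();
        // Visit sstables newest-first by their per-partition max timestamp.
        sstables.sort(Comparator.comparingLong((SSTableView s) -> s.maxTimestampFor(key)).reversed());
        for (SSTableView sstable : sstables)
        {
            // Stop once we hold every selected column and nothing in the remaining
            // (older) sstables can possibly supersede what we already have.
            if (result.keySet().containsAll(selected)
                && sstable.maxTimestampFor(key) < minTimestamp(result, selected))
                break;
            for (Map.Entry<String, Cell> e : sstable.cellsFor(key).entrySet())
                result.merge(e.getKey(), e.getValue(),
                             (a, b) -> a.timestamp >= b.timestamp ? a : b);
        }
        return result;
    }

    private static long minTimestamp(Map<String, Cell> result, Set<String> selected)
    {
        long min = Long.MAX_VALUE;
        for (String column : selected)
            min = Math.min(min, result.get(column).timestamp);
        return min;
    }
}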




On Sun, Sep 7, 2014 at 11:30 PM, Mark Papadakis 
wrote:

> Greetings,
>
> This heuristic helps us eliminate unnecessary I/O in certain workloads and
> datasets, by often many orders of magnitude. This is description of the
> problems we faced and how we dealt with it — I am pretty certain this can
> be easily implemented on C*, albeit will likely require a new SSTable
> format that can support the semantics described below.
>
> # Example
> One of our services, a price comparison service, has many millions of
> products in our datastore, and we access over 100+ rows on a single page
> request (almost all of them in 2-3 MultiGets - those are executed in
> parallel in our datastore implementation). This is fine, and it rarely
> takes more than 100ms to get back responses from all those requests.
>
> Because we, in practice, need to update all key=>value rows multiple times
> a day (merchants tend to update their price every few hours for some odd
> reason), it means that a key’s columns exist in multiple(almost always all)
> SSTables of a CF, and so, we almost always have to merge the final value
> for each key from all those many SSTables, as opposed to only need to
> access a single SSTable to do that.
>
> In fact, for most CFs of this service, we need to merge most of their
> SSTables to get the final CF, because of that same reason (rows update very
> frequently, as opposed to say, a ‘users’ CF where you typically only set
> the row once on creation and very infrequently after that ).
> Surely, there  must have been ways to exploit this pattern and access and
> update semantics. (there are).
>
> Our SSTables are partitioned into chunks. One of those chunks is the index
> chunk which holds distinctKeyId:u64 => offset:u32, sorted by distinctKeyId.
> We have a map which allows us to map  distinctKeyId:u64=> data chunk
> region(skip list), so that this offset is an offset into the respective
> data chunk region — this is so that we won’t have to store 64bit offsets
> there, and that saves us 4bytes / row (every ~4GB we track another data
> chunk region so in practice this is a constant operation ).
>
> # How we are mitigating IO access and merge costs
> Anyway, when enabled, with the additional cost of 64bits for each entry in
> the index chunk, instead of keyId:u64=>(offset:u32), we now use
> keyId:u64=>(offset:u32, latestTs:u32, signature:u32).
>
> For each CF we are about to serialise to an SSTable, we identify the
> latest timestamp of all columns(milliseconds, we need the unix timestamp).
> Depending on the configuration we also do either of:
> 1. A digest of all column names.  Currently, we use CRC32. When we build
> the SSTable index chunk, we store distinctKeyId:u64 =>
> (dataChunkSegmentOffset:u32, latestTimestamp:u32, digest:u32)
> 2. Compute a mask based on the first 31 distinct column names encountered
> so far. Here is some pseudocode:
>
> Dictionary sstableDistinctColumnNames;
> uint32_t mask = 0;
>
> for (const auto &it : cf->columns)
> {
> const auto name = it.name;
>
> if  (sstableDistinctColumnNames.IsSet(name))
> mask|=(1< else if (sstableDistinctColumnNames.Size() == 31)
> {
> // Cannot track this column, so we won’t be able to do
> much about this row
> mask = 0;
> }
> else
> {
> mask|=(1< sstableDistinctColumnsNames.Set(name,
> sstableDistinctColumnNames.Size());
> }
>
> and so, we again store distinctKeyId:u64 => (dataChunkSegmentOffset:u32,
> latestTimestamp:u32, map:u32).
> We also store sstableDistinctColumnsNames in the SSTable header (each
> SSTable has a header chunk where we store KV records).
>
> Each method comes with pros and cons. Though they probably make sense and
> you get where this is going already, wi

Re: [contrib] Idea/ Reduced I/O and CPU cost for GET ops

2014-09-07 Thread Mark Papadakis
Hi Benedict,


> On Sep 8, 2014, at 4:01 AM, Benedict Elliott Smith 
>  wrote:
> 
> Hi Mark,
> 
> This specific heuristic is unlikely to be applied, as (if I've understood
> it correctly) it has a very narrow window of utility to only those updates
> that hit *exactly* the same clustering columns (cql rows) *and *data
> columns,
Well, method #1 indeed applies to updates that pertain to exactly the same
columns, like you said. That is to say, going back to the products example: if
you end up updating e.g. (price, title) every day and therefore end up with a
dozen SSTables, where in one SSTable you have all properties for the product
(merchant info, category, image, ...) and in almost all others you have an
update for just (price, title), this helps you avoid I/O and CPU costs by only
having to consider 2 SSTables (index chunk segments are almost always cached).
Method #2 does not have this requirement. It works even if you don’t have the
exact same set of columns in the row in each SSTable; however, you are limited
to the first 31 distinct ones on a per-SSTable basis.


> and is not trivial to maintain (either cpu- or memory-wise).
> However two variants on this heuristic are already being considered for
> inclusion as part of the new sstable format we're introducing in
> CASSANDRA-7447 ,
> which is an extension of the the heuristic that is already applied at the
> whole sstable level.
> 
Very interesting. We have made some similar choices (albeit differently, and
for different reasons — e.g. timestamp/TTL compression, per-file layout index,
etc). The described heuristics, in practice and for the workloads and update
and retrieval patterns I described, are extremely effective. They also turned
out to be particularly cheap to compute and store, and add very little overhead
to the GET logic implementation. Again, those are configured on a per-CF basis
(either method is selected), so we only do this where we know it makes sense to
do it.
Thank you for the link; some great ideas there. This looks like a great update
to C*.


> (1) Per partition, we will store the maximum timestamp (whether or not this
> sits in the hash index / key cache, or in the clustering column index part,
> is an open question)
> - this permits us to stop looking at files once we have a complete set of
> the data we expect to return, i.e. all selected fields for the complete set
> of selected rows
> 
> (2) Per clustering row, we may store a enough information to construct the
> max timestamp for the row
> - this permits us to stop looking at data pages if we know we have all
> selected fields for a given row only
> 
> 
> 
> 
> On Sun, Sep 7, 2014 at 11:30 PM, Mark Papadakis 
> wrote:
> 
>> Greetings,
>> 
>> This heuristic helps us eliminate unnecessary I/O in certain workloads and
>> datasets, by often many orders of magnitude. This is description of the
>> problems we faced and how we dealt with it — I am pretty certain this can
>> be easily implemented on C*, albeit will likely require a new SSTable
>> format that can support the semantics described below.
>> 
>> # Example
>> One of our services, a price comparison service, has many millions of
>> products in our datastore, and we access over 100+ rows on a single page
>> request (almost all of them in 2-3 MultiGets - those are executed in
>> parallel in our datastore implementation). This is fine, and it rarely
>> takes more than 100ms to get back responses from all those requests.
>> 
>> Because we, in practice, need to update all key=>value rows multiple times
>> a day (merchants tend to update their price every few hours for some odd
>> reason), it means that a key’s columns exist in multiple(almost always all)
>> SSTables of a CF, and so, we almost always have to merge the final value
>> for each key from all those many SSTables, as opposed to only need to
>> access a single SSTable to do that.
>> 
>> In fact, for most CFs of this service, we need to merge most of their
>> SSTables to get the final CF, because of that same reason (rows update very
>> frequently, as opposed to say, a ‘users’ CF where you typically only set
>> the row once on creation and very infrequently after that ).
>> Surely, there  must have been ways to exploit this pattern and access and
>> update semantics. (there are).
>> 
>> Our SSTables are partitioned into chunks. One of those chunks is the index
>> chunk which holds distinctKeyId:u64 => offset:u32, sorted by distinctKeyId.
>> We have a map which allows us to map  distinctKeyId:u64=> data chunk
>> region(skip list), so that this offset is an offset into the respective
>> data chunk region — this is so that we won’t have to store 64bit offsets
>> there, and that saves us 4bytes / row (every ~4GB we track another data
>> chunk region so in practice this is a constant operation ).
>> 
>> # How we are mitigating IO access and merge costs
>> Anyway, when enabled, with the additional cost of 64bi

Re: [VOTE] Release Apache Cassandra 2.1.0

2014-09-07 Thread Jason Brown
+1

On Sun, Sep 7, 2014 at 4:17 PM, Josh McKenzie 
wrote:

> +1
>
> On Sun, Sep 7, 2014 at 9:23 AM, Sylvain Lebresne 
> wrote:
>
> > We have no outstanding tickets open and tests are in the green, so I
> > propose
> > the following artifacts for release as 2.1.0.
> >
> > sha1: c6a2c65a75adea9a62896269da98dd036c8e57f3
> > Git:
> >
> >
> http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/2.1.0-tentative
> > Artifacts:
> >
> >
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/org/apache/cassandra/apache-cassandra/2.1.0/
> > Staging repository:
> >
> https://repository.apache.org/content/repositories/orgapachecassandra-1031/
> >
> > The artifacts as well as the debian package are also available here:
> > http://people.apache.org/~slebresne
> >
> > The vote will be open for 72 hours (longer if needed).
> >
> > [1]: http://goo.gl/zfCTyc (CHANGES.txt)
> > [2]: http://goo.gl/uAoTTC (NEWS.txt)
> >
>
>
>
> --
> Joshua McKenzie
> DataStax -- The Apache Cassandra Company
>