Nodes failed to bootstrap, no nodetool info but system.peer populated.

2015-05-11 Thread Carlos Rolo
Hi all,

I just wanted to know whether this is worth filing a bug (I couldn't find any
similar issue).

I have a 3-node cluster (2.0.14) and decided to add 3 new nodes. 2 of them failed
because of hardware failures (virtualized environment).

The process was automated, so what was supposed to happen was:

- Node 4 joins
- wait until status is UN and then 2min more
- Node 5 joins
- wait until status is UN and then 2min more
- Node 6 joins
- wait until status is UN and then 2min more

What happened:
- Node 4 joins
- Wait...
- Node 5 joins
- VM fails while node is starting.
- VM 6 starts, no node with UN, waits 2min
- Node 6 joins
- VM fails while node is starting.

After this, nodetool reports 4 nodes, all UN.
While testing an application (DataStax Java Driver 2.1), its debug log
reports that it tries to connect to Nodes 5 and 6 and fails.

Checking the system.peers table, I see both nodes there. So I tried "nodetool
removenode " with the IDs from the table.

It blows up with the following exception:
Exception in thread "main" java.lang.UnsupportedOperationException: Host ID
not found.

Then I decided to do the following:
DELETE from peers where ID in (ID1, ID2);

All good, cluster still happy and driver not complaining anymore.
Is this expected behavior?
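
For reference, a minimal sketch of that cleanup done through the Java driver
(addresses and contact point below are placeholders; system.peers is keyed by
the peer's address rather than its host ID, and it is node-local, so the
deletes have to be run against every node that still lists the stale peers):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class StalePeersCleanup {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
        try {
            Session session = cluster.connect();
            // Inspect what this node thinks its peers are.
            for (Row row : session.execute("SELECT peer, host_id FROM system.peers")) {
                System.out.println(row.getInet("peer") + " -> " + row.getUUID("host_id"));
            }
            // Drop the rows for the two nodes that never finished bootstrapping.
            session.execute("DELETE FROM system.peers WHERE peer = '10.0.0.5'");
            session.execute("DELETE FROM system.peers WHERE peer = '10.0.0.6'");
        } finally {
            cluster.close();
        }
    }
}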



Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

-- 


--





Re: Stream sstables hosted on a node from client using streaming protocol

2015-05-11 Thread Yuki Morishita
Yeah, at least you need to set up the Schema for loading the SSTable.
If you find anything that we can improve, please file a JIRA.
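
For comparison, the supported route for streaming existing sstables into a
cluster from the outside is the sstableloader tool, which fetches the schema
from the cluster and sets up the streaming session itself; roughly (contact
point and path are placeholders):

sstableloader -d 10.0.0.1 /path/to/my_keyspace/my_table/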

On Sun, May 10, 2015 at 5:27 AM, Pierre Devops  wrote:
> OK so I know a little more now: it's not doable in client mode ATM because
> it relies too much on server-side code.
>
> It needs to initialize a ColumnFamilyStore and use an instance of it
> afterwards, which would require too much server-side configuration
> initialization.
>
> Secondly, the way it streams is inefficient, because it deserializes the
> streamed sstable to rebuild a new sstable in SSTableWriter.appendFromStream
> (needed to rebuild the index and other components), while I just need to copy
> the -Data- file to disk.
>
> So I think I'm going to provide my own IncomingFileMessage and its own
> deserializer.
>
>
>
> 2015-05-09 23:32 GMT+02:00 Pierre Devops :
>
>> Thanks Yuki, copying SSTableLoader was the first thing I tried, but without
>> success.
>>
>> I checked BulkLoadConnectionFactory (
>> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/BulkLoadConnectionFactory.java)
>> and I don't see what it provides over the DefaultConnectionFactory that could
>> help me more in this case.
>>
>> Without setting up a custom connection factory, it already manages to
>> connect to the node and send a streaming request (I can see it in the
>> Cassandra logs).
>>
>> INFO  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899 ID#0]
>>> Creating new streaming plan for SST Import
>>> INFO  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899, ID#0]
>>> Received streaming plan for SST Import
>>> INFO  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899, ID#0]
>>> Received streaming plan for SST Import
>>> INFO  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899 ID#0]
>>> Prepare completed. Receiving 0 files(0 bytes), sending 2 files(4083518
>>> bytes)
>>> INFO  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899] Session
>>> with /127.0.0.1 is complete
>>> WARN  21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899] Stream
>>> failed
>>> ERROR 21:16:25 [Stream #a630d860-f690-11e4-a2d0-adca0d5ee899] Streaming
>>> error occurred
>>
>>
>>
>> So it looks like my client is receiving two messages in its
>> ConnectionHandler loop (
>> https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/streaming/ConnectionHandler.java#L251)
>> ; the first one is a PREPARE_MESSAGE with a StreamSummary indicating the
>> correct number of files.
>>
>> But it fails to deserialize the second message it receives. So I debugged and
>> dumped what was coming from this socket, and it was the sstables, but I don't
>> know why deserialization of the message type fails.
>>
>>



-- 
Yuki Morishita
 t:yukim (http://twitter.com/yukim)


Re: Nodes failed to bootstrap, no nodetool info but system.peer populated.

2015-05-11 Thread Brandon Williams
https://issues.apache.org/jira/browse/CASSANDRA-9180

On Mon, May 11, 2015 at 4:17 AM, Carlos Rolo  wrote:

> Hi all,
>
> I just wanted to know if this should be worth filling a bug or not
> (Couldn't find any similar).
>
> I have a 3 node cluster (2.0.14). Decided to add 3 new ones. 2 failed
> because of hardware failure (virtualized environment).
>
> The process was automated, so what was supposed to happen was:
>
> - Node 4 joins
> - wait until status is UN and then 2min more
> - Node 5 joins
> - wait until status is UN and then 2min more
> - Node 6 joins
> - wait until status is UN and then 2min more
>
> What happened:
> - Node 4 joins
> - Wait...
> - Node 5 joins
> - VM fails while node is starting.
> - VM 6 starts, no node with UN, waits 2min
> - Node 6 joins
> - VM fails while node is starting.
>
> After this, nodetool reports 4 nodes all UN
> While trying an application (Datastax Java Driver 2.1) the debug log
> reports that it tries to connect to Node 5 and 6 and fails.
>
> Checking system.peers table, I see both nodes there. So I tried "nodetool
> removenode " with the IDs in the table.
>
> It blows up with the following exception:
> Exception in thread "main" java.lang.UnsupportedOperationException: Host ID
> not found.
>
> Then I decided to do the following:
> DELETE from peers where ID in (ID1, ID2);
>
> All good, cluster still happy and driver not complaining anymore.
> Is this expected behavior?
>
>
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: *
> linkedin.com/in/carlosjuzarterolo
> *
> Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> www.pythian.com
>
> --
>
>
> --
>
>
>
>


Re: Nodes failed to bootstrap, no nodetool info but system.peer populated.

2015-05-11 Thread Carlos Rolo
Thanks!

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Mon, May 11, 2015 at 4:29 PM, Brandon Williams  wrote:

> https://issues.apache.org/jira/browse/CASSANDRA-9180
>
> On Mon, May 11, 2015 at 4:17 AM, Carlos Rolo  wrote:
>
> > Hi all,
> >
> > I just wanted to know if this should be worth filling a bug or not
> > (Couldn't find any similar).
> >
> > I have a 3 node cluster (2.0.14). Decided to add 3 new ones. 2 failed
> > because of hardware failure (virtualized environment).
> >
> > The process was automated, so what was supposed to happen was:
> >
> > - Node 4 joins
> > - wait until status is UN and then 2min more
> > - Node 5 joins
> > - wait until status is UN and then 2min more
> > - Node 6 joins
> > - wait until status is UN and then 2min more
> >
> > What happened:
> > - Node 4 joins
> > - Wait...
> > - Node 5 joins
> > - VM fails while node is starting.
> > - VM 6 starts, no node with UN, waits 2min
> > - Node 6 joins
> > - VM fails while node is starting.
> >
> > After this, nodetool reports 4 nodes all UN
> > While trying an application (Datastax Java Driver 2.1) the debug log
> > reports that it tries to connect to Node 5 and 6 and fails.
> >
> > Checking system.peers table, I see both nodes there. So I tried "nodetool
> > removenode " with the IDs in the table.
> >
> > It blows up with the following exception:
> > Exception in thread "main" java.lang.UnsupportedOperationException: Host
> ID
> > not found.
> >
> > Then I decided to do the following:
> > DELETE from peers where ID in (ID1, ID2);
> >
> > All good, cluster still happy and driver not complaining anymore.
> > Is this expected behavior?
> >
> >
> >
> > Regards,
> >
> > Carlos Juzarte Rolo
> > Cassandra Consultant
> >
> > Pythian - Love your data
> >
> > rolo@pythian | Twitter: cjrolo | Linkedin: *
> > linkedin.com/in/carlosjuzarterolo
> > *
> > Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> > www.pythian.com
> >
> > --
> >
> >
> > --
> >
> >
> >
> >
>

-- 


--





Wrap around CQL queries for token ranges?

2015-05-11 Thread Brian O'Neill

I was doing some testing around data locality today (and adding it to our
distributed processing layer).
I retrieved all of the TokenRanges back using:
tokenRanges = metadata.getTokenRanges(keyspace, localhost)


And when I spun through the token ranges returned, I ended up missing
records. The root cause was the "edge case" where the ring wraps around.

It generated the following CQL query: (using the last token range)

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE
token(id)>8743874685407455894 AND token(id)<=-8851282698028303387;

(0 rows)

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE
token(id)<=-8851282698028303387 AND token(id)>-9223372036854775808;

 token(id)            | id | name
----------------------+----+--------
 -9157060164899361011 | 23 | name23
 -9108684050423740263 | 53 | name53
 -9084883821289052775 | 91 | name91
(3 rows)

NOTE: If I use Long.MAX_VALUE instead, I get the records.

I can hack this at the app layer by issuing separate queries for the
wrap-around case, but I wonder if CQL should support wrap-around queries.
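
As an illustration, a minimal sketch of that app-layer split under the
Murmur3 partitioner, reusing the placeholder table from the example above:

import java.util.ArrayList;
import java.util.List;

public class TokenRangeQueries {
    // Build the CQL queries covering the (start, end] token range; if the
    // range wraps (end <= start), cover (start, +2^63-1] and [-2^63, end]
    // with two separate queries.
    static List<String> queriesFor(long start, long end) {
        String base = "SELECT token(id), id, name FROM test_keyspace.test_table WHERE ";
        List<String> queries = new ArrayList<String>();
        if (end > start) {
            queries.add(base + "token(id) > " + start + " AND token(id) <= " + end);
        } else {
            queries.add(base + "token(id) > " + start);
            queries.add(base + "token(id) <= " + end);
        }
        return queries;
    }
}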

-brian

---
Brian O'Neill 
Chief Technology Officer
Health Market Science, a LexisNexis Company
215.588.6024 Mobile • @boneill42


This information transmitted in this email message is for the intended
recipient only and may contain confidential and/or privileged material. If
you received this email in error and are not the intended recipient, or the
person responsible to deliver it to the intended recipient, please contact
the sender at the email above and delete this email and any attachments and
destroy any copies thereof. Any review, retransmission, dissemination,
copying or other use of, or taking any action in reliance upon, this
information by persons or entities other than the intended recipient is
strictly prohibited.
 




Re: Nodes failed to bootstrap, no nodetool info but system.peer populated.

2015-05-11 Thread Sebastian Estevez
I hit this issue today with the C# driver. I still think the drivers should
handle system.peers inconsistencies better, and maybe even output warnings about
them.

I opened CSHARP-296; @rolo, it's probably a good idea to open a similar one
for Java.
On May 11, 2015 11:24 AM, "Carlos Rolo"  wrote:

> Thanks!
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: *
> linkedin.com/in/carlosjuzarterolo
> *
> Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> www.pythian.com
>
> On Mon, May 11, 2015 at 4:29 PM, Brandon Williams 
> wrote:
>
> > https://issues.apache.org/jira/browse/CASSANDRA-9180
> >
> > On Mon, May 11, 2015 at 4:17 AM, Carlos Rolo  wrote:
> >
> > > Hi all,
> > >
> > > I just wanted to know if this should be worth filling a bug or not
> > > (Couldn't find any similar).
> > >
> > > I have a 3 node cluster (2.0.14). Decided to add 3 new ones. 2 failed
> > > because of hardware failure (virtualized environment).
> > >
> > > The process was automated, so what was supposed to happen was:
> > >
> > > - Node 4 joins
> > > - wait until status is UN and then 2min more
> > > - Node 5 joins
> > > - wait until status is UN and then 2min more
> > > - Node 6 joins
> > > - wait until status is UN and then 2min more
> > >
> > > What happened:
> > > - Node 4 joins
> > > - Wait...
> > > - Node 5 joins
> > > - VM fails while node is starting.
> > > - VM 6 starts, no node with UN, waits 2min
> > > - Node 6 joins
> > > - VM fails while node is starting.
> > >
> > > After this, nodetool reports 4 nodes all UN
> > > While trying an application (Datastax Java Driver 2.1) the debug log
> > > reports that it tries to connect to Node 5 and 6 and fails.
> > >
> > > Checking system.peers table, I see both nodes there. So I tried
> "nodetool
> > > removenode " with the IDs in the table.
> > >
> > > It blows up with the following exception:
> > > Exception in thread "main" java.lang.UnsupportedOperationException:
> Host
> > ID
> > > not found.
> > >
> > > Then I decided to do the following:
> > > DELETE from peers where ID in (ID1, ID2);
> > >
> > > All good, cluster still happy and driver not complaining anymore.
> > > Is this expected behavior?
> > >
> > >
> > >
> > > Regards,
> > >
> > > Carlos Juzarte Rolo
> > > Cassandra Consultant
> > >
> > > Pythian - Love your data
> > >
> > > rolo@pythian | Twitter: cjrolo | Linkedin: *
> > > linkedin.com/in/carlosjuzarterolo
> > > *
> > > Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> > > www.pythian.com
> > >
> > > --
> > >
> > >
> > > --
> > >
> > >
> > >
> > >
> >
>
> --
>
>
> --
>
>
>
>


Re: Wrap around CQL queries for token ranges?

2015-05-11 Thread Brian O'Neill
Looks like the java-driver supplies the hack I need.  (TokenRange.unwrap)

I'll leave it to you guys to decide whether it would be more elegant to
support wrapping natively in CQL.
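
For anyone else hitting this, a rough sketch of the unwrap() approach with
java-driver 2.1 (contact point, keyspace, and table names are placeholders):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.TokenRange;

public class UnwrapTokenRanges {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect();
            Metadata metadata = cluster.getMetadata();
            // getTokenRanges() covers the whole ring once; getTokenRanges(keyspace, host)
            // returns the ranges replicated on one host for locality-aware processing.
            for (TokenRange range : metadata.getTokenRanges()) {
                // unwrap() returns the range itself if it does not wrap, or two
                // non-wrapping sub-ranges if it does.
                for (TokenRange sub : range.unwrap()) {
                    for (Row row : session.execute(
                            "SELECT token(id), id, name FROM test_keyspace.test_table"
                                    + " WHERE token(id) > ? AND token(id) <= ?",
                            sub.getStart().getValue(), sub.getEnd().getValue())) {
                        System.out.println(row);
                    }
                }
            }
        } finally {
            cluster.close();
        }
    }
}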

-brian

---
Brian O'Neill 
Chief Technology Officer
Health Market Science, a LexisNexis Company
215.588.6024 Mobile • @boneill42




From:  Brian O'Neill 
Date:  Monday, May 11, 2015 at 12:32 PM
To:  "dev@cassandra.apache.org" 
Subject:  Wrap around CQL queries for token ranges?


I was doing some testing around data locality today (and adding it to our
distributed processing layer).
I retrieved all of the TokenRanges back using:
tokenRanges = metadata.getTokenRanges(keyspace, localhost)


And when I spun through the token ranges returned, I ended up missing
records. The root cause was the "edge case" where the ring wraps around.

It generated the following CQL query: (using the last token range)

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE
token(id)>8743874685407455894 AND token(id)<=-8851282698028303387;

(0 rows)

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE
token(id)<=-8851282698028303387 AND token(id)>-9223372036854775808;

 token(id)            | id | name
----------------------+----+--------
 -9157060164899361011 | 23 | name23
 -9108684050423740263 | 53 | name53
 -9084883821289052775 | 91 | name91
(3 rows)

NOTE: If I use Long.MAX_VALUE instead, I get the records.

I can hack this at the app layer, to issue separate queries for the wrap
around case, but I wonder if CQL should support wrap around queries???

-brian

---
Brian O'Neill 
Chief Technology Officer
Health Market Science, a LexisNexis Company
215.588.6024 Mobile • @boneill42






Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jonathan Ellis
On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
wrote:

> 3.0, however, will require a stabilisation period, just by the nature of
> it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and
> 2.2 are, if you go purely by the feature list, but in fact the opposite is
> true.
>

You are probably right.  But let me push back on some of the extra work
you're proposing just a little:

1) 2.0.x branch goes EOL when 3.0 is out, as planned
>

3.0 was, however unrealistically, planned for April.  And it's moving the
goalposts to say the plan was always to keep 2.0.x for three major
releases; the plan was to EOL with "the next major release after 2.1"
whether that was called 3.0 or not.  So I think EOLing 2.0.x when 2.2 comes
out is reasonable, especially considering that 2.2 is realistically a month
or two away even if we can get a beta out this week.

2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
> storage engine
>

Yes.


> 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to
> 2.2, get the same stability as with 2.1.7, plus a few new features
>

If push comes to shove I'm okay being ambiguous here, but can we just say
"when 3.0 is released we EOL 2.1?"

P.S. The area I'm most concerned about introducing destabilizing changes in
2.2 is commitlog; I will follow up to make sure we have a solid QA plan
there.

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Wrap around CQL queries for token ranges?

2015-05-11 Thread Aleksey Yeschenko
That was an intentional decision on our side. Have a look at 
https://issues.apache.org/jira/browse/CASSANDRA-5573 - Sylvain’s comment in 
particular.

-- 
AY

On May 11, 2015 at 20:05:54, Brian O'Neill (b...@alumni.brown.edu) wrote:

Looks like the java-driver supplies the hack I need. (TokenRange.unwrap)  

I'll leave it to you guys to decide if it is more elegant to support
wrapping natively in CQL.  

-brian  

---  
Brian O'Neill  
Chief Technology Officer  
Health Market Science, a LexisNexis Company  
215.588.6024 Mobile • @boneill42





From: Brian O'Neill   
Date: Monday, May 11, 2015 at 12:32 PM  
To: "dev@cassandra.apache.org"   
Subject: Wrap around CQL queries for token ranges?  


I was doing some testing around data locality today (and adding it to our  
distributed processing layer).  
I retrieved all of the TokenRanges back using:  
tokenRanges = metadata.getTokenRanges(keyspace, localhost)  


And when I spun through the token ranges returned, I ended up missing  
records.  
The root cause was the "edge case" where the ring wraps around.

It generated the following CQL query: (using the last token range)  

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE  
token(id)>8743874685407455894 AND token(id)<=-8851282698028303387;  

(0 rows)  

cqlsh> SELECT token(id),id,name FROM test_keyspace.test_table WHERE  
token(id)<=-8851282698028303387 AND token(id)>-9223372036854775808;  

 token(id)            | id | name
----------------------+----+--------
-9157060164899361011 | 23 | name23  
-9108684050423740263 | 53 | name53  
-9084883821289052775 | 91 | name91  
(3 rows)  

NOTE: If I use Long.MAX_VALUE instead, I get the records.  

I can hack this at the app layer, to issue separate queries for the wrap  
around case, but I wonder if CQL should support wrap around queries???  

-brian  

---  
Brian O'Neill  
Chief Technology Officer  
Health Market Science, a LexisNexis Company  
215.588.6024 Mobile • @boneill42







Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Aleksey Yeschenko
> So I think EOLing 2.0.x when 2.2 comes 
> out is reasonable, especially considering that 2.2 is realistically a month 
> or two away even if we can get a beta out this week. 

Given how long 2.0.x has been alive now, and the stability of 2.1.x at the 
moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out. Can’t 
argue here.

> If push comes to shove I'm okay being ambiguous here, but can we just say 
> "when 3.0 is released we EOL 2.1?" 

Under our current projections, that’ll be exactly “a few months after 2.2 is 
released”, so I’m again fine with it.

> P.S. The area I'm most concerned about introducing destabilizing changes in 
> 2.2 is commitlog

So long as you don’t use the compressed CL, you should be solid. You are probably
solid even if you do use the compressed CL.

Here are my only concerns:

1. The new authnz is not opt-in. If a user implements their own custom
authenticator or authorizer, they’d have to upgrade them sooner. The test
coverage for the new authnz, however, is better than the coverage we used to have
before.

2. CQL2 is gone from 2.2. That might force those who still use it to migrate
faster. In practice, however, I highly doubt that anybody using CQL2 is also
someone who’d already switched to 2.1.x or would switch to 2.2.x.


-- 
AY

On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:

On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko   
wrote:  

> 3.0, however, will require a stabilisation period, just by the nature of  
> it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and  
> 2.2 are, if you go purely by the feature list, but in fact the opposite is  
> true.  
>  

You are probably right. But let me push back on some of the extra work  
you're proposing just a little:  

1) 2.0.x branch goes EOL when 3.0 is out, as planned  
>  

3.0 was, however unrealistically, planned for April. And it's moving the  
goalposts to say the plan was always to keep 2.0.x for three major  
releases; the plan was to EOL with "the next major release after 2.1"  
whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2 comes  
out is reasonable, especially considering that 2.2 is realistically a month  
or two away even if we can get a beta out this week.  

2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new  
> storage engine  
>  

Yes.  


> 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to  
> 2.2, get the same stability as with 2.1.7, plus a few new features  
>  

If push comes to shove I'm okay being ambiguous here, but can we just say  
"when 3.0 is released we EOL 2.1?"  

P.S. The area I'm most concerned about introducing destabilizing changes in  
2.2 is commitlog; I will follow up to make sure we have a solid QA plan  
there.  

--  
Jonathan Ellis  
Project Chair, Apache Cassandra  
co-founder, http://www.datastax.com  
@spyced  


Re: Nodes failed to bootstrap, no nodetool info but system.peer populated.

2015-05-11 Thread Carlos Rolo
Thanks also!

I did it, JAVA-761 created!

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Mon, May 11, 2015 at 6:48 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> I hit this issue today with the c# driver. I still think the drivers should
> handle peers inconsistencies better and maybe even output warnings about
> them.
>
> I opened CSHARP-296, @rolo, it's probably a good idea to open a similar one
> for java.
> On May 11, 2015 11:24 AM, "Carlos Rolo"  wrote:
>
> > Thanks!
> >
> > Regards,
> >
> > Carlos Juzarte Rolo
> > Cassandra Consultant
> >
> > Pythian - Love your data
> >
> > rolo@pythian | Twitter: cjrolo | Linkedin: *
> > linkedin.com/in/carlosjuzarterolo
> > *
> > Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> > www.pythian.com
> >
> > On Mon, May 11, 2015 at 4:29 PM, Brandon Williams 
> > wrote:
> >
> > > https://issues.apache.org/jira/browse/CASSANDRA-9180
> > >
> > > On Mon, May 11, 2015 at 4:17 AM, Carlos Rolo  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I just wanted to know if this should be worth filling a bug or not
> > > > (Couldn't find any similar).
> > > >
> > > > I have a 3 node cluster (2.0.14). Decided to add 3 new ones. 2 failed
> > > > because of hardware failure (virtualized environment).
> > > >
> > > > The process was automated, so what was supposed to happen was:
> > > >
> > > > - Node 4 joins
> > > > - wait until status is UN and then 2min more
> > > > - Node 5 joins
> > > > - wait until status is UN and then 2min more
> > > > - Node 6 joins
> > > > - wait until status is UN and then 2min more
> > > >
> > > > What happened:
> > > > - Node 4 joins
> > > > - Wait...
> > > > - Node 5 joins
> > > > - VM fails while node is starting.
> > > > - VM 6 starts, no node with UN, waits 2min
> > > > - Node 6 joins
> > > > - VM fails while node is starting.
> > > >
> > > > After this, nodetool reports 4 nodes all UN
> > > > While trying an application (Datastax Java Driver 2.1) the debug log
> > > > reports that it tries to connect to Node 5 and 6 and fails.
> > > >
> > > > Checking system.peers table, I see both nodes there. So I tried
> > "nodetool
> > > > removenode " with the IDs in the table.
> > > >
> > > > It blows up with the following exception:
> > > > Exception in thread "main" java.lang.UnsupportedOperationException:
> > Host
> > > ID
> > > > not found.
> > > >
> > > > Then I decided to do the following:
> > > > DELETE from peers where ID in (ID1, ID2);
> > > >
> > > > All good, cluster still happy and driver not complaining anymore.
> > > > Is this expected behavior?
> > > >
> > > >
> > > >
> > > > Regards,
> > > >
> > > > Carlos Juzarte Rolo
> > > > Cassandra Consultant
> > > >
> > > > Pythian - Love your data
> > > >
> > > > rolo@pythian | Twitter: cjrolo | Linkedin: *
> > > > linkedin.com/in/carlosjuzarterolo
> > > > *
> > > > Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
> > > > www.pythian.com
> > > >
> > > > --
> > > >
> > > >
> > > > --
> > > >
> > > >
> > > >
> > > >
> > >
> >
> > --
> >
> >
> > --
> >
> >
> >
> >
>

-- 


--





Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jonathan Ellis
Sounds good.  I will add the new version to Jira.

Planned tickets to block 2.2 beta for:

#8374
#8984
#9190

Any others?  (If it's not code complete today we should not block for it.)


On Mon, May 11, 2015 at 1:59 PM, Aleksey Yeschenko 
wrote:

> > So I think EOLing 2.0.x when 2.2 comes
> > out is reasonable, especially considering that 2.2 is realistically a
> month
> > or two away even if we can get a beta out this week.
>
> Given how long 2.0.x has been alive now, and the stability of 2.1.x at the
> moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out. Can’t
> argue here.
>
> > If push comes to shove I'm okay being ambiguous here, but can we just
> say
> > "when 3.0 is released we EOL 2.1?"
>
> Under our current projections, that’ll be exactly “a few months after 2.2
> is released”, so I’m again fine with it.
>
> > P.S. The area I'm most concerned about introducing destabilizing changes
> in
> > 2.2 is commitlog
>
> So long as you don’t you compressed CL, you should be solid. You are
> probably solid even if you do use compressed CL.
>
> Here are my only concerns:
>
> 1. New authz are not opt-in. If a user implements their own custom
> authenticator or authorized, they’d have to upgrade them sooner. The test
> coverage for new authnz, however, is better than the coverage we used to
> have before.
>
> 2. CQL2 is gone from 2.2. Might force those who use it migrate faster. In
> practice, however, I highly doubt that anybody using CQL2 is also someone
> who’d already switch to 2.1.x or 2.2.x.
>
>
> --
> AY
>
> On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:
>
> On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
> wrote:
>
> > 3.0, however, will require a stabilisation period, just by the nature of
> > it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and
> > 2.2 are, if you go purely by the feature list, but in fact the opposite
> is
> > true.
> >
>
> You are probably right. But let me push back on some of the extra work
> you're proposing just a little:
>
> 1) 2.0.x branch goes EOL when 3.0 is out, as planned
> >
>
> 3.0 was, however unrealistically, planned for April. And it's moving the
> goalposts to say the plan was always to keep 2.0.x for three major
> releases; the plan was to EOL with "the next major release after 2.1"
> whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2 comes
> out is reasonable, especially considering that 2.2 is realistically a month
> or two away even if we can get a beta out this week.
>
> 2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
> > storage engine
> >
>
> Yes.
>
>
> > 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to
> > 2.2, get the same stability as with 2.1.7, plus a few new features
> >
>
> If push comes to shove I'm okay being ambiguous here, but can we just say
> "when 3.0 is released we EOL 2.1?"
>
> P.S. The area I'm most concerned about introducing destabilizing changes in
> 2.2 is commitlog; I will follow up to make sure we have a solid QA plan
> there.
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Brian Hess
One thing that does jump out at me, though, is about CQL2.  As much as we
have advised against using cassandra-jdbc, I have encountered a few users who
actually use it as an integration point.  I believe that
cassandra-jdbc is CQL2-based, which is the main reason we have been
advising folks against it.

Can we just confirm that there isn't in fact widespread use of CQL2-based
cassandra-jdbc?  That just jumps out at me.

On Mon, May 11, 2015 at 2:59 PM, Aleksey Yeschenko 
wrote:

> > So I think EOLing 2.0.x when 2.2 comes
> > out is reasonable, especially considering that 2.2 is realistically a
> month
> > or two away even if we can get a beta out this week.
>
> Given how long 2.0.x has been alive now, and the stability of 2.1.x at the
> moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out. Can’t
> argue here.
>
> > If push comes to shove I'm okay being ambiguous here, but can we just
> say
> > "when 3.0 is released we EOL 2.1?"
>
> Under our current projections, that’ll be exactly “a few months after 2.2
> is released”, so I’m again fine with it.
>
> > P.S. The area I'm most concerned about introducing destabilizing changes
> in
> > 2.2 is commitlog
>
> So long as you don’t you compressed CL, you should be solid. You are
> probably solid even if you do use compressed CL.
>
> Here are my only concerns:
>
> 1. New authz are not opt-in. If a user implements their own custom
> authenticator or authorized, they’d have to upgrade them sooner. The test
> coverage for new authnz, however, is better than the coverage we used to
> have before.
>
> 2. CQL2 is gone from 2.2. Might force those who use it migrate faster. In
> practice, however, I highly doubt that anybody using CQL2 is also someone
> who’d already switch to 2.1.x or 2.2.x.
>
>
> --
> AY
>
> On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:
>
> On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
> wrote:
>
> > 3.0, however, will require a stabilisation period, just by the nature of
> > it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and
> > 2.2 are, if you go purely by the feature list, but in fact the opposite
> is
> > true.
> >
>
> You are probably right. But let me push back on some of the extra work
> you're proposing just a little:
>
> 1) 2.0.x branch goes EOL when 3.0 is out, as planned
> >
>
> 3.0 was, however unrealistically, planned for April. And it's moving the
> goalposts to say the plan was always to keep 2.0.x for three major
> releases; the plan was to EOL with "the next major release after 2.1"
> whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2 comes
> out is reasonable, especially considering that 2.2 is realistically a month
> or two away even if we can get a beta out this week.
>
> 2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
> > storage engine
> >
>
> Yes.
>
>
> > 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to
> > 2.2, get the same stability as with 2.1.7, plus a few new features
> >
>
> If push comes to shove I'm okay being ambiguous here, but can we just say
> "when 3.0 is released we EOL 2.1?"
>
> P.S. The area I'm most concerned about introducing destabilizing changes in
> 2.2 is commitlog; I will follow up to make sure we have a solid QA plan
> there.
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jonathan Ellis
Unresolved issues tagged for 2.2b1:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20fixVersion%20%3D%20%222.2%20beta%201%22%20AND%20resolution%20%3D%20Unresolved%20ORDER%20BY%20due%20ASC%2C%20priority%20DESC%2C%20created%20ASC

On Mon, May 11, 2015 at 2:42 PM, Jonathan Ellis  wrote:

> Sounds good.  I will add the new version to Jira.
>
> Planned tickets to block 2.2 beta for:
>
> #8374
> #8984
> #9190
>
> Any others?  (If it's not code complete today we should not block for it.)
>
>
> On Mon, May 11, 2015 at 1:59 PM, Aleksey Yeschenko 
> wrote:
>
>> > So I think EOLing 2.0.x when 2.2 comes
>> > out is reasonable, especially considering that 2.2 is realistically a
>> month
>> > or two away even if we can get a beta out this week.
>>
>> Given how long 2.0.x has been alive now, and the stability of 2.1.x at
>> the moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out.
>> Can’t argue here.
>>
>> > If push comes to shove I'm okay being ambiguous here, but can we just
>> say
>> > "when 3.0 is released we EOL 2.1?"
>>
>> Under our current projections, that’ll be exactly “a few months after 2.2
>> is released”, so I’m again fine with it.
>>
>> > P.S. The area I'm most concerned about introducing destabilizing
>> changes in
>> > 2.2 is commitlog
>>
>> So long as you don’t you compressed CL, you should be solid. You are
>> probably solid even if you do use compressed CL.
>>
>> Here are my only concerns:
>>
>> 1. New authz are not opt-in. If a user implements their own custom
>> authenticator or authorized, they’d have to upgrade them sooner. The test
>> coverage for new authnz, however, is better than the coverage we used to
>> have before.
>>
>> 2. CQL2 is gone from 2.2. Might force those who use it migrate faster. In
>> practice, however, I highly doubt that anybody using CQL2 is also someone
>> who’d already switch to 2.1.x or 2.2.x.
>>
>>
>> --
>> AY
>>
>> On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:
>>
>> On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
>> wrote:
>>
>> > 3.0, however, will require a stabilisation period, just by the nature of
>> > it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and
>> > 2.2 are, if you go purely by the feature list, but in fact the opposite
>> is
>> > true.
>> >
>>
>> You are probably right. But let me push back on some of the extra work
>> you're proposing just a little:
>>
>> 1) 2.0.x branch goes EOL when 3.0 is out, as planned
>> >
>>
>> 3.0 was, however unrealistically, planned for April. And it's moving the
>> goalposts to say the plan was always to keep 2.0.x for three major
>> releases; the plan was to EOL with "the next major release after 2.1"
>> whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2 comes
>> out is reasonable, especially considering that 2.2 is realistically a
>> month
>> or two away even if we can get a beta out this week.
>>
>> 2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
>> > storage engine
>> >
>>
>> Yes.
>>
>>
>> > 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to
>> > 2.2, get the same stability as with 2.1.7, plus a few new features
>> >
>>
>> If push comes to shove I'm okay being ambiguous here, but can we just say
>> "when 3.0 is released we EOL 2.1?"
>>
>> P.S. The area I'm most concerned about introducing destabilizing changes
>> in
>> 2.2 is commitlog; I will follow up to make sure we have a solid QA plan
>> there.
>>
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder, http://www.datastax.com
>> @spyced
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jeremiah Jordan
Cassandra-jdbc can do CQL3 as well as CQL2. The rub (and why I would never
recommend it) is that it does CQL3 over Thrift, so you lose out on all the
native protocol features.



> On May 11, 2015, at 2:53 PM, Brian Hess  wrote:
> 
> One thing that does jump out at me, though, is about CQL2.  As much as we
> have advised against using cassandra-jdbc, I have encountered a few that
> actually have used that as an integration point.  I believe that
> cassandra-jdbc is CQL2-based, which is the main reason we have been
> advising folks against it.
> 
> Can we just confirm that there isn't in fact widespread use of CQL2-based
> cassandra-jdbc?  That just jumps out at me.
> 
> On Mon, May 11, 2015 at 2:59 PM, Aleksey Yeschenko 
> wrote:
> 
>>> So I think EOLing 2.0.x when 2.2 comes
>>> out is reasonable, especially considering that 2.2 is realistically a
>> month
>>> or two away even if we can get a beta out this week.
>> 
>> Given how long 2.0.x has been alive now, and the stability of 2.1.x at the
>> moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out. Can’t
>> argue here.
>> 
>>> If push comes to shove I'm okay being ambiguous here, but can we just
>> say
>>> "when 3.0 is released we EOL 2.1?"
>> 
>> Under our current projections, that’ll be exactly “a few months after 2.2
>> is released”, so I’m again fine with it.
>> 
>>> P.S. The area I'm most concerned about introducing destabilizing changes
>> in
>>> 2.2 is commitlog
>> 
>> So long as you don’t you compressed CL, you should be solid. You are
>> probably solid even if you do use compressed CL.
>> 
>> Here are my only concerns:
>> 
>> 1. New authz are not opt-in. If a user implements their own custom
>> authenticator or authorized, they’d have to upgrade them sooner. The test
>> coverage for new authnz, however, is better than the coverage we used to
>> have before.
>> 
>> 2. CQL2 is gone from 2.2. Might force those who use it migrate faster. In
>> practice, however, I highly doubt that anybody using CQL2 is also someone
>> who’d already switch to 2.1.x or 2.2.x.
>> 
>> 
>> --
>> AY
>> 
>> On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:
>> 
>> On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
>> wrote:
>> 
>>> 3.0, however, will require a stabilisation period, just by the nature of
>>> it. It might seem like 2.2 and 3.0 are closer to each other than 2.1 and
>>> 2.2 are, if you go purely by the feature list, but in fact the opposite
>> is
>>> true.
>> 
>> You are probably right. But let me push back on some of the extra work
>> you're proposing just a little:
>> 
>> 1) 2.0.x branch goes EOL when 3.0 is out, as planned
>> 
>> 3.0 was, however unrealistically, planned for April. And it's moving the
>> goalposts to say the plan was always to keep 2.0.x for three major
>> releases; the plan was to EOL with "the next major release after 2.1"
>> whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2 comes
>> out is reasonable, especially considering that 2.2 is realistically a month
>> or two away even if we can get a beta out this week.
>> 
>> 2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
>>> storage engine
>> 
>> Yes.
>> 
>> 
>>> 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade to
>>> 2.2, get the same stability as with 2.1.7, plus a few new features
>> 
>> If push comes to shove I'm okay being ambiguous here, but can we just say
>> "when 3.0 is released we EOL 2.1?"
>> 
>> P.S. The area I'm most concerned about introducing destabilizing changes in
>> 2.2 is commitlog; I will follow up to make sure we have a solid QA plan
>> there.
>> 
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder, http://www.datastax.com
>> @spyced
>> 


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Alex Popescu
On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:

> Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> basically just move 8099 to 3.1).
> In the end it’s ”only a label”. But there are a lot of new user-facing
> features in it that justifies a major release.
>

+1 on labeling the proposed 2.2 as 3.0 and moving 8099 to 3.1.

1. Tons of new features that feel more than just a 2.2
2. The majority of features planned for 3.0 are actually ready for this
version
3. In order to avoid compatibility questions (and version compatibility
matrices), the drivers developed by DataStax have
followed the Cassandra versions so far. The Python and C# drivers are
already at 2.5 as they added some major features.

   Renaming the proposed 2.2 as 3.0 would allow us to continue to use this
versioning policy until all drivers support the latest Cassandra version, and
to keep users from having to check a compatibility matrix.


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Aleksey Yeschenko
The drivers, actually, aren’t ready at all for 3.0 with 8099, because 6717 will 
be pushed shortly after 8099, and break things.

-- 
AY

On May 11, 2015 at 23:29:13, Alex Popescu (al...@datastax.com) wrote:

On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:  

> Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so  
> basically just move 8099 to 3.1).  
> In the end it’s ”only a label”. But there are a lot of new user-facing  
> features in it that justifies a major release.  
>  

+1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)  

1. Tons of new features that feel more than just a 2.2  
2. The majority of features planned for 3.0 are actually ready for this  
version  
3. in order to avoid compatiblity questions (and version compatibility  
matrices), the drivers developed by DataStax have  
followed the Cassandra versions so far. The Python and C# drivers are  
already at 2.5 as they added some major features.  

Renaming the proposed 2.2 as 3.0 would allow us to continue to use this  
versioning policy until all drivers are supporting  
the latest Cassandra version and continue to not require a user to check  
a compatibility matrix.  


--  
Bests,  

Alex Popescu | @al3xandru  
Sen. Product Manager @ DataStax  


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Brian Hess
Jeremiah - we still need to worry about whether folks are doing CQL2 or CQL3
over cassandra-jdbc.

If it is not in much use, that's fine by me.  I just wanted to raise one
place where folks might be using CQL2 without realizing it.

On Mon, May 11, 2015 at 4:00 PM, Jeremiah Jordan 
wrote:

> Cassandra-jdbc can do cql3 as well as cql2. The rub (and why I would never
> recommend it) is that it does cql3 over thrift. So you lose out on all the
> native protocol features.
>
>
>
> > On May 11, 2015, at 2:53 PM, Brian Hess  wrote:
> >
> > One thing that does jump out at me, though, is about CQL2.  As much as we
> > have advised against using cassandra-jdbc, I have encountered a few that
> > actually have used that as an integration point.  I believe that
> > cassandra-jdbc is CQL2-based, which is the main reason we have been
> > advising folks against it.
> >
> > Can we just confirm that there isn't in fact widespread use of CQL2-based
> > cassandra-jdbc?  That just jumps out at me.
> >
> > On Mon, May 11, 2015 at 2:59 PM, Aleksey Yeschenko 
> > wrote:
> >
> >>> So I think EOLing 2.0.x when 2.2 comes
> >>> out is reasonable, especially considering that 2.2 is realistically a
> >> month
> >>> or two away even if we can get a beta out this week.
> >>
> >> Given how long 2.0.x has been alive now, and the stability of 2.1.x at
> the
> >> moment, I’d say it’s fair enough to EOL 2.0 as soon as 2.2 gets out.
> Can’t
> >> argue here.
> >>
> >>> If push comes to shove I'm okay being ambiguous here, but can we just
> >> say
> >>> "when 3.0 is released we EOL 2.1?"
> >>
> >> Under our current projections, that’ll be exactly “a few months after
> 2.2
> >> is released”, so I’m again fine with it.
> >>
> >>> P.S. The area I'm most concerned about introducing destabilizing
> changes
> >> in
> >>> 2.2 is commitlog
> >>
> >> So long as you don’t you compressed CL, you should be solid. You are
> >> probably solid even if you do use compressed CL.
> >>
> >> Here are my only concerns:
> >>
> >> 1. New authz are not opt-in. If a user implements their own custom
> >> authenticator or authorized, they’d have to upgrade them sooner. The
> test
> >> coverage for new authnz, however, is better than the coverage we used to
> >> have before.
> >>
> >> 2. CQL2 is gone from 2.2. Might force those who use it migrate faster.
> In
> >> practice, however, I highly doubt that anybody using CQL2 is also
> someone
> >> who’d already switch to 2.1.x or 2.2.x.
> >>
> >>
> >> --
> >> AY
> >>
> >> On May 11, 2015 at 21:12:26, Jonathan Ellis (jbel...@gmail.com) wrote:
> >>
> >> On Sun, May 10, 2015 at 2:42 PM, Aleksey Yeschenko 
> >> wrote:
> >>
> >>> 3.0, however, will require a stabilisation period, just by the nature
> of
> >>> it. It might seem like 2.2 and 3.0 are closer to each other than 2.1
> and
> >>> 2.2 are, if you go purely by the feature list, but in fact the opposite
> >> is
> >>> true.
> >>
> >> You are probably right. But let me push back on some of the extra work
> >> you're proposing just a little:
> >>
> >> 1) 2.0.x branch goes EOL when 3.0 is out, as planned
> >>
> >> 3.0 was, however unrealistically, planned for April. And it's moving the
> >> goalposts to say the plan was always to keep 2.0.x for three major
> >> releases; the plan was to EOL with "the next major release after 2.1"
> >> whether that was called 3.0 or not. So I think EOLing 2.0.x when 2.2
> comes
> >> out is reasonable, especially considering that 2.2 is realistically a
> month
> >> or two away even if we can get a beta out this week.
> >>
> >> 2) 3.0.x LTS branch stays, as planned, and helps us stabilise the new
> >>> storage engine
> >>
> >> Yes.
> >>
> >>
> >>> 3) in a few months after 2.2 gets released, we EOL 2.1. Users upgrade
> to
> >>> 2.2, get the same stability as with 2.1.7, plus a few new features
> >>
> >> If push comes to shove I'm okay being ambiguous here, but can we just
> say
> >> "when 3.0 is released we EOL 2.1?"
> >>
> >> P.S. The area I'm most concerned about introducing destabilizing
> changes in
> >> 2.2 is commitlog; I will follow up to make sure we have a solid QA plan
> >> there.
> >>
> >> --
> >> Jonathan Ellis
> >> Project Chair, Apache Cassandra
> >> co-founder, http://www.datastax.com
> >> @spyced
> >>
>


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Alex Popescu
On Mon, May 11, 2015 at 1:32 PM, Aleksey Yeschenko 
wrote:

> The drivers, actually, aren’t ready at all for 3.0 with 8099, because 6717
> will be pushed shortly after 8099, and break things.


Apologies, I didn't mean they are ready today. Version-wise, renaming this
proposed 2.2 to 3.0 would allow us to maintain a
versioning policy that made things quite simple for users: Cassandra
version == driver version.


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jake Luciani
Overall +1.

I'm -0 on EOLing 2.0 once 2.2 is released. I'd rather keep 2.0 around
till 3.0 comes out.

As for 2.2 blockers, we might want to vet and make sure everything we
need in protocol v4 is finished before we release 2.2
https://issues.apache.org/jira/browse/CASSANDRA-8043


On Sat, May 9, 2015 at 6:38 PM, Jonathan Ellis  wrote:
> With 8099 still weeks from being code complete, and even longer from being
> stable, I’m starting to think we should decouple everything that’s already
> done in trunk from 8099.  That is, ship 2.2 ASAP with
> - Windows support
> - UDF
> - Role-based permissions
> - JSON
> - Compressed commitlog
> - Off-heap row cache
> - Message coalescing on by default
> - Native protocol v4
> and let 3.0 ship with 8099 and a few things that finish by then (vnode
> compaction, file-based hints, maybe materialized views).
>
> Remember that we had 7 release candidates for 2.1.  Splitting 2.2 and 3.0 up
> this way will reduce the risk in both 2.2 and 3.0 by separating most of the
> new features from the big engine change.  We might still have a lot of
> stabilization to do for either or both, but at the least this lets us get a
> head start on testing the new features in 2.2.
>
> This does introduce a new complication, which is that instead of 3.0 being an
> unusually long time after 2.1, it will be an unusually short time after 2.2.
> The “default” if we follow established practice would be to
>
> - EOL 2.1 when 3.0 ships, and maintain 2.2.x and 3.0.x stabilization branches
>
> But, this is probably not the best investment we could make for our users
> since 2.2 and 3.0 are relatively close in functionality.  I see a couple
> other options without jumping to 3 concurrent stabilization series:
>
> - Extend 2.1.x series and 2.2.x until 4.0, but skip 3.0.x stabilization
>   series in favor of tick-tock 3.x
> - Extend 2.1.x series until 4.0, but stop 2.2.x when 3.0 ships in favor of
>   developing 3.0.x instead
>
> Thoughts?
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced



-- 
http://twitter.com/tjake


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Aleksey Yeschenko
Can you not alter the version of the drivers, though? 3.0 to 2.2?

-- 
AY

On May 11, 2015 at 23:38:00, Alex Popescu (al...@datastax.com) wrote:

On Mon, May 11, 2015 at 1:32 PM, Aleksey Yeschenko   
wrote:  

> The drivers, actually, aren’t ready at all for 3.0 with 8099, because 6717  
> will be pushed shortly after 8099, and break things.  


Apologies, I didn't mean they are ready today. Version-wise, renaming this  
proposed 2.2 to 3.0 would allow us to maintain a  
versioning policy that made things quite simple for users: Cassandra  
version == driver version.  


--  
Bests,  

Alex Popescu | @al3xandru  
Sen. Product Manager @ DataStax  


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Alex Popescu
On Mon, May 11, 2015 at 1:41 PM, Aleksey Yeschenko 
wrote:

> 3.0 to 2.2?


Python and C# have already used 2.5 (I wouldn't have brought this up if I
had other options).


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jonathan Ellis
I do like 2.2 and 3.0 over 3.0 and 3.1 because going from 2.x to 3.x
signals that 8099 really is a big change.

On Mon, May 11, 2015 at 3:28 PM, Alex Popescu  wrote:

> On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:
>
> > Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> > basically just move 8099 to 3.1).
> > In the end it’s ”only a label”. But there are a lot of new user-facing
> > features in it that justifies a major release.
> >
>
> +1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)
>
> 1. Tons of new features that feel more than just a 2.2
> 2. The majority of features planned for 3.0 are actually ready for this
> version
> 3. in order to avoid compatiblity questions (and version compatibility
> matrices), the drivers developed by DataStax have
> followed the Cassandra versions so far. The Python and C# drivers are
> already at 2.5 as they added some major features.
>
>Renaming the proposed 2.2 as 3.0 would allow us to continue to use this
> versioning policy until all drivers are supporting
>the latest Cassandra version and continue to not require a user to check
> a compatibility matrix.
>
>
> --
> Bests,
>
> Alex Popescu | @al3xandru
> Sen. Product Manager @ DataStax
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Jonathan Haddad
I'm not sure if the complications surrounding the versioning of the drivers
should be factored into the releases of Cassandra.  I think that 3.0
signals a massive change and calling the release containing 8099 a .1 would
be drastically underplaying how big of a release it is - from the
perspective of the end user it would be a disservice.


On Mon, May 11, 2015 at 2:09 PM Jonathan Ellis  wrote:

> I do like 2.2 and 3.0 over 3.0 and 3.1 because going from 2.x to 3.x
> signals that 8099 really is a big change.
>
> On Mon, May 11, 2015 at 3:28 PM, Alex Popescu  wrote:
>
> > On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:
> >
> > > Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> > > basically just move 8099 to 3.1).
> > > In the end it’s ”only a label”. But there are a lot of new user-facing
> > > features in it that justifies a major release.
> > >
> >
> > +1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)
> >
> > 1. Tons of new features that feel more than just a 2.2
> > 2. The majority of features planned for 3.0 are actually ready for this
> > version
> > 3. in order to avoid compatiblity questions (and version compatibility
> > matrices), the drivers developed by DataStax have
> > followed the Cassandra versions so far. The Python and C# drivers are
> > already at 2.5 as they added some major features.
> >
> >Renaming the proposed 2.2 as 3.0 would allow us to continue to use
> this
> > versioning policy until all drivers are supporting
> >the latest Cassandra version and continue to not require a user to
> check
> > a compatibility matrix.
> >
> >
> > --
> > Bests,
> >
> > Alex Popescu | @al3xandru
> > Sen. Product Manager @ DataStax
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Alex Popescu
Another option could be 2.1 -> 2.5* -> 3.0

This would still emphasize the major new features and changes in both
versions.

(*) unfortunately 2.5 would not help for drivers, so labeling 2.6 would get
my +1.

On Mon, May 11, 2015 at 2:09 PM, Jonathan Ellis  wrote:

> I do like 2.2 and 3.0 over 3.0 and 3.1 because going from 2.x to 3.x
> signals that 8099 really is a big change.
>
> On Mon, May 11, 2015 at 3:28 PM, Alex Popescu  wrote:
>
> > On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:
> >
> > > Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> > > basically just move 8099 to 3.1).
> > > In the end it’s ”only a label”. But there are a lot of new user-facing
> > > features in it that justifies a major release.
> > >
> >
> > +1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)
> >
> > 1. Tons of new features that feel more than just a 2.2
> > 2. The majority of features planned for 3.0 are actually ready for this
> > version
> > 3. in order to avoid compatiblity questions (and version compatibility
> > matrices), the drivers developed by DataStax have
> > followed the Cassandra versions so far. The Python and C# drivers are
> > already at 2.5 as they added some major features.
> >
> >Renaming the proposed 2.2 as 3.0 would allow us to continue to use
> this
> > versioning policy until all drivers are supporting
> >the latest Cassandra version and continue to not require a user to
> check
> > a compatibility matrix.
> >
> >
> > --
> > Bests,
> >
> > Alex Popescu | @al3xandru
> > Sen. Product Manager @ DataStax
> >
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder, http://www.datastax.com
> @spyced
>



-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Alex Popescu
On Mon, May 11, 2015 at 2:16 PM, Jonathan Haddad  wrote:

> I'm not sure if the complications surrounding the versioning of the drivers
> should be factored into the releases of Cassandra.


I agree. If we could come up with a versioning scheme that would also work
for drivers, that would be
the ideal case as it will prove quite helpful to our users.


> I think that 3.0
> signals a massive change and calling the release containing 8099 a .1 would
> be drastically underplaying how big of a release it is - from the
> perspective of the end user it would be a disservice.
>
>
I see. My last suggestion could work though as it signals both releases
having significant impact.



>
> On Mon, May 11, 2015 at 2:09 PM Jonathan Ellis  wrote:
>
> > I do like 2.2 and 3.0 over 3.0 and 3.1 because going from 2.x to 3.x
> > signals that 8099 really is a big change.
> >
> > On Mon, May 11, 2015 at 3:28 PM, Alex Popescu 
> wrote:
> >
> > > On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:
> > >
> > > > Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> > > > basically just move 8099 to 3.1).
> > > > In the end it’s ”only a label”. But there are a lot of new
> user-facing
> > > > features in it that justifies a major release.
> > > >
> > >
> > > +1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)
> > >
> > > 1. Tons of new features that feel more than just a 2.2
> > > 2. The majority of features planned for 3.0 are actually ready for this
> > > version
> > > 3. in order to avoid compatiblity questions (and version compatibility
> > > matrices), the drivers developed by DataStax have
> > > followed the Cassandra versions so far. The Python and C# drivers
> are
> > > already at 2.5 as they added some major features.
> > >
> > >Renaming the proposed 2.2 as 3.0 would allow us to continue to use
> > this
> > > versioning policy until all drivers are supporting
> > >the latest Cassandra version and continue to not require a user to
> > check
> > > a compatibility matrix.
> > >
> > >
> > > --
> > > Bests,
> > >
> > > Alex Popescu | @al3xandru
> > > Sen. Product Manager @ DataStax
> > >
> >
> >
> >
> > --
> > Jonathan Ellis
> > Project Chair, Apache Cassandra
> > co-founder, http://www.datastax.com
> > @spyced
> >
>



-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax


Re: Proposal: release 2.2 (based on current trunk) before 3.0 (based on 8099)

2015-05-11 Thread Michael Kjellman
Last I checked (and I could be wrong), we’ve never had to think about what to
number a Cassandra version because a single ticket’s changes could “impact” our
users so dramatically. Food for thought.

love,
kjellman

> On May 11, 2015, at 2:20 PM, Alex Popescu  wrote:
> 
> On Mon, May 11, 2015 at 2:16 PM, Jonathan Haddad  wrote:
> 
>> I'm not sure if the complications surrounding the versioning of the drivers
>> should be factored into the releases of Cassandra.
> 
> 
> I agree. If we could come up with a versioning scheme that would also work
> for drivers, that would be
> the ideal case as it will prove quite helpful to our users.
> 
> 
>> I think that 3.0
>> signals a massive change and calling the release containing 8099 a .1 would
>> be drastically underplaying how big of a release it is - from the
>> perspective of the end user it would be a disservice.
>> 
>> 
> I see. My last suggestion could work though as it signals both releases
> having significant impact.
> 
> 
> 
>> 
>> On Mon, May 11, 2015 at 2:09 PM Jonathan Ellis  wrote:
>> 
>>> I do like 2.2 and 3.0 over 3.0 and 3.1 because going from 2.x to 3.x
>>> signals that 8099 really is a big change.
>>> 
>>> On Mon, May 11, 2015 at 3:28 PM, Alex Popescu 
>> wrote:
>>> 
 On Sun, May 10, 2015 at 2:14 PM, Robert Stupp  wrote:
 
> Instead of labeling it 2.2, I’d like to propose to label it 3.0 (so
> basically just move 8099 to 3.1).
> In the end it’s ”only a label”. But there are a lot of new
>> user-facing
> features in it that justifies a major release.
> 
 
 +1 on labeling the proposed 2.2 as 3.0 and moving (8099 to 3.1)
 
 1. Tons of new features that feel more than just a 2.2
 2. The majority of features planned for 3.0 are actually ready for this
 version
 3. in order to avoid compatiblity questions (and version compatibility
 matrices), the drivers developed by DataStax have
followed the Cassandra versions so far. The Python and C# drivers
>> are
 already at 2.5 as they added some major features.
 
   Renaming the proposed 2.2 as 3.0 would allow us to continue to use
>>> this
 versioning policy until all drivers are supporting
   the latest Cassandra version and continue to not require a user to
>>> check
 a compatibility matrix.
 
 
 --
 Bests,
 
 Alex Popescu | @al3xandru
 Sen. Product Manager @ DataStax
 
>>> 
>>> 
>>> 
>>> --
>>> Jonathan Ellis
>>> Project Chair, Apache Cassandra
>>> co-founder, http://www.datastax.com
>>> @spyced
>>> 
>> 
> 
> 
> 
> -- 
> Bests,
> 
> Alex Popescu | @al3xandru
> Sen. Product Manager @ DataStax