Cassandra 1.2 & Compressed Data

2016-01-11 Thread Ken Hancock
in Cassandra 1.2.18 for compressed data become unbounded and can consume as much heap space as compressed data is read. Searching Jira, I found https://issues.apache.org/jira/browse/CASSANDRA-5661 which sounds like the fix effectively orphaned Cassandra 1.2: "Reader pooling was introduced

Re: Cassandra 1.2.x EOL date

2015-05-28 Thread Robert Coli
On Wed, May 27, 2015 at 5:10 PM, Jason Unovitch wrote: > Simple and quick question, can anyone point me to where the Cassandra > 1.2.x series EOL date was announced? I see archived mailing list > threads for 1.2.19 mentioning it was going to be the last release and > I see CVE-2015-

Cassandra 1.2.x EOL date

2015-05-27 Thread Jason Unovitch
Evening list, Simple and quick question, can anyone point me to where the Cassandra 1.2.x series EOL date was announced? I see archived mailing list threads for 1.2.19 mentioning it was going to be the last release and I see CVE-2015-0225 mention it is EOL. I didn't see it say when the off

Re: Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-04 Thread Fabrice Facorat
Upgrading a node from 1.2.13 to 2.0.10 works correctly and we did run upgradesstables on the new 2.0.x node. The issue lies with the other nodes still running Cassandra 1.2.x, which fail to start if you just restart the node. Here is the describecluster output during the upgrade

Re: Composite Keys in cassandra 1.2

2015-03-03 Thread Kai Wang
This is a tough one. One thing I can think of is to use Spark/Spark SQL to run ad-hoc queries on C* cluster. You can post on "Spark Cassandra Connector" user group. On Tue, Mar 3, 2015 at 10:18 AM, Yulian Oifa wrote: > Hello > Initially problem is that customer wants to have an option for ANY qu

Re: Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-03 Thread Tobias Hauth
o...@gmail.com> wrote: >> Hi, we have a 52-node Cassandra cluster running Apache Cassandra 1.2.13. As we are planning to migrate to Cassandra 2.0.10, we decided to do some tests and we noticed that once a node in the cluster has been upgraded

Re: Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-03 Thread Nate McCall
nce a node in the cluster has been > upgraded to Cassandra 2.0.x, restarting a Cassandra 1.2.x node will fail. > > The tests were done on a 6-node cluster running Apache Cassandra > 1.2.13 (x5) + Apache Cassandra 2.0.10 (x1) and using java 1.7.0_07. > The cassandra 1.2.x is failing with

Issue restarting cassandra with a cluster running Cassandra 1.2.x and Cassandra 2.0.x

2015-03-03 Thread Fabrice Facorat
Hi, we have a 52-node Cassandra cluster running Apache Cassandra 1.2.13. As we are planning to migrate to Cassandra 2.0.10, we decided to do some tests and we noticed that once a node in the cluster has been upgraded to Cassandra 2.0.x, restarting a Cassandra 1.2.x node will fail. The tests were

Re: Composite Keys in cassandra 1.2

2015-03-03 Thread Yulian Oifa
Hello Initially the problem is that the customer wants an option for ANY query, which does not fit well with NoSQL. However, the size of the data is too big for a relational DB. There are no typical queries on the data; there are 10 fields, and queries may be made on any combination of them.

Re: Composite Keys in cassandra 1.2

2015-03-02 Thread Kai Wang
AFAIK it's not possible. The fact that you need to query the data by a partial row key indicates your data model isn't right. What are your typical queries on the data? On Sun, Mar 1, 2015 at 7:24 AM, Yulian Oifa wrote: > Hello to all. > Let's assume a scenario where the key is a compound type with 3 types in

Composite Keys in cassandra 1.2

2015-03-01 Thread Yulian Oifa
Hello to all. Let's assume a scenario where the key is a compound type with 3 types in it (Long, UTF8, UTF8). Each row stores timeuuids as column names with empty values. Is it possible to retrieve data by a single key part (for example by the Long only) by using java thrift? Best regards Yulian Oifa
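Kai Wang's reply above rules this out when all three parts form the row key. A CQL 3 reframing of the same distinction, as a sketch only — table and column names here are hypothetical, not from the thread:

```cql
-- If all three parts form the composite partition key, a query on the
-- first part alone is rejected (the partition key must be complete):
CREATE TABLE events (
  id bigint, part2 text, part3 text, ts timeuuid,
  PRIMARY KEY ((id, part2, part3), ts)
);
-- SELECT * FROM events WHERE id = 42;   -- error: incomplete partition key

-- If only the Long is the partition key and the other parts are
-- clustering columns, querying by the Long alone works:
CREATE TABLE events2 (
  id bigint, part2 text, part3 text, ts timeuuid,
  PRIMARY KEY (id, part2, part3, ts)
);
SELECT * FROM events2 WHERE id = 42;
```

The choice of partition key therefore decides which partial-key queries are possible at all.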

Is Cassandra-jdbc-1.2.5 compatible with Cassandra 1.2.x version?

2014-07-02 Thread Harsha Kumara
Hi all, Can I know $subject? Thanks, Harsha -- *Harsha Kumara* *Software Engineer* *WSO2 Inc.* *Sri Lanka.*

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-25 Thread Jonathan Lacefield
>> Hello, >> You have several options: >> 1) going forward lower gc_grace_seconds http://www.datastax.com/documentation/cassandra/1.2/cassandra/configuration/configStorage_r.html?pagename=docs&version=1.2&file=

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-25 Thread Oleg Dulin
collected over 100 GB of tombstones and seems much happier now. Oleg On 2014-03-10 13:33:43 +0000, Jonathan Lacefield said: Hello, You have several options: 1) going forward lower gc_grace_seconds http://www.datastax.com/documentation/cassandra/1.2/cassandra/configuration/configStorage_r.html

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-11 Thread Oleg Dulin
/documentation/cassandra/1.2/cassandra/configuration/configStorage_r.html?pagename=docs&version=1.2&file=configuration/storage_configuration#gc-grace-seconds        - this is very use case specific.  Default is 10 days.  Some users will put this at 0 for specific use cases.   2) you could al

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-11 Thread Takenori Sato
In addition to the suggestions by Jonathan, you can run a user defined compaction against a particular set of SSTable files, where you want to remove tombstones. But to do that, you need to find such an optimal set. Here you can find a couple of helpful tools. https://github.com/cloudian/support-
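For reference, a user defined compaction is triggered through JMX rather than nodetool. A hedged sketch using the jmxterm CLI — the jar path, keyspace, and SSTable file names below are placeholders, and the exact operation signature may differ between versions:

```shell
# Placeholder names throughout; requires a running node with JMX on 7199.
echo 'run -b org.apache.cassandra.db:type=CompactionManager \
  forceUserDefinedCompaction Keyspace1 Standard1-ic-1234-Data.db' \
  | java -jar jmxterm-1.0-alpha-4-uber.jar -l localhost:7199 -n
```

Picking which SSTables to feed in is exactly the "find an optimal set" problem the tools linked above address.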

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-10 Thread Keith Wright
Date: Monday, March 10, 2014 at 8:33 AM To: user@cassandra.apache.org Subject: Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram Hello, You have several options: 1) going forwa

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-10 Thread Jonathan Lacefield
Hello, You have several options: 1) going forward lower gc_grace_seconds http://www.datastax.com/documentation/cassandra/1.2/cassandra/configuration/configStorage_r.html?pagename=docs&version=1.2&file=configuration/storage_configuration#gc-grace-seconds - this is very
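Option 1 is a per-column-family setting. A minimal CQL 3 sketch, assuming a hypothetical table named events (the one-day value is purely illustrative):

```cql
-- Lower the tombstone grace period from the 10-day default to 1 day.
ALTER TABLE events WITH gc_grace_seconds = 86400;
```

Note the trade-off the linked docs describe: a lower value means less time for repairs to propagate deletions before tombstones are purged.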

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-10 Thread Oleg Dulin
I get that :) What I'd like to know is how to fix it :) On 2014-03-09 20:24:54 +0000, Takenori Sato said: You have millions of org.apache.cassandra.db.DeletedColumn instances in the snapshot. This means you have lots of column tombstones, which, I guess, are read into memory by slice q

Re: need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-09 Thread Takenori Sato
You have millions of org.apache.cassandra.db.DeletedColumn instances in the snapshot. This means you have lots of column tombstones, which, I guess, are read into memory by slice queries. On Sun, Mar 9, 2014 at 10:55 PM, Oleg Dulin wrote: > I am trying to understand why one of my nodes keeps

need help with Cassandra 1.2 Full GCing -- output of jmap histogram

2014-03-09 Thread Oleg Dulin
I am trying to understand why one of my nodes keeps doing full GCs. I have Xmx set to 8 GB; memtable total size is 2 GB. Consider the top entries from jmap -histo:live @ http://pastebin.com/UaatHfpJ -- Regards, Oleg Dulin http://www.olegdulin.com

Re: Cassandra 1.2 : OutOfMemoryError: unable to create new native thread

2013-12-18 Thread Oleg Dulin
I figured it out. Another process on that machine was leaking threads. All is well! Thanks guys! Oleg On 2013-12-16 13:48:39 +0000, Maciej Miklas said: cassandra-env.sh has the option JVM_OPTS="$JVM_OPTS -Xss180k". It will give this error if you start Cassandra with Java 7. So increase the

Re: Cassandra 1.2 : OutOfMemoryError: unable to create new native thread

2013-12-17 Thread Aaron Morton
Try using jstack to see if there are a lot of threads there. Are you using vNodea and Hadoop ? https://issues.apache.org/jira/browse/CASSANDRA-6169 Cheers - Aaron Morton New Zealand @aaronmorton Co-Founder & Principal Consultant Apache Cassandra Consulting http://www.thelas

Re: Cassandra 1.2 : OutOfMemoryError: unable to create new native thread

2013-12-16 Thread Maciej Miklas
cassandra-env.sh has the option JVM_OPTS="$JVM_OPTS -Xss180k". It will give this error if you start Cassandra with Java 7, so increase the value or remove the option. Regards, Maciej On Mon, Dec 16, 2013 at 2:37 PM, srmore wrote: > What is your thread stack size (-Xss)? Try increasing that; that
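The fix Maciej describes is a one-line change in cassandra-env.sh. A sketch — the value is illustrative; 64-bit Java 7 typically refuses to start with a stack smaller than about 228k:

```shell
# cassandra-env.sh: raise the per-thread stack size for Java 7
JVM_OPTS="$JVM_OPTS -Xss256k"
```

A larger -Xss costs memory per thread, which matters on nodes with thousands of threads, so raise it only as far as needed.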

Re: Cassandra 1.2 : OutOfMemoryError: unable to create new native thread

2013-12-16 Thread srmore
What is your thread stack size (-Xss)? Try increasing that; that could help. Sometimes the limitation is imposed by the host provider (e.g. Amazon EC2 etc.) Thanks, Sandeep On Mon, Dec 16, 2013 at 6:53 AM, Oleg Dulin wrote: > Hi guys! > > I believe my limits settings are correct. Here is the o

Cassandra 1.2 : OutOfMemoryError: unable to create new native thread

2013-12-16 Thread Oleg Dulin
Hi guys! I believe my limits settings are correct. Here is the output of "ulimit -a": core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i)

Re: mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Edward Capriolo
We ran a Cassandra LAN party once with a mixed environment. http://www.datastax.com/dev/blog/cassandra-nyc-lan-party This was obviously a trivial setup. I think areas of concern would be if you have column families located on different devices, and streaming-related issues. It might work just fine how

Re: mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Илья Шипицин
We want to migrate a hundred-gigabyte cluster from Windows to Linux without interrupting operation, i.e. node by node. On Tuesday, October 22, 2013, Jon Haddad wrote: > I can't imagine any situation where this would be practical. What would > be the reason to even consider this? > > On

Re: mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Илья Шипицин
The technical reason is the path separator, which is different on Linux and Windows. If you search through the mailing list, you will find evidence that it does not work and is not supported. But the most recent notice I found was about 0.7 and there was no Jira bug number. Just "unsupported

Re: mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Jon Haddad
I can't imagine any situation where this would be practical. What would be the reason to even consider this? On Oct 21, 2013, at 11:06 AM, Robert Coli wrote: > On Mon, Oct 21, 2013 at 12:55 AM, Илья Шипицин wrote: > is mixed linux/windows cluster configuration supported in 1.2 ? > > I don't

Re: mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Robert Coli
On Mon, Oct 21, 2013 at 12:55 AM, Илья Шипицин wrote: > is mixed linux/windows cluster configuration supported in 1.2 ? > I don't think it's officially supported in any version; you would be among a very small number of people operating in this way. However there is no technical reason it "shoul

mixed linux/windows cluster in Cassandra-1.2

2013-10-21 Thread Илья Шипицин
Hello! is mixed linux/windows cluster configuration supported in 1.2 ? Cheers, Ilya Shipitsin

Re: Cassandra 1.2: old node does not want to re-join the ring

2013-09-23 Thread Robert Coli
On Mon, Aug 26, 2013 at 5:39 AM, Denis Kot wrote: > Please help. We spent almost 3 days trying to fix it with no luck. > Did you ultimately succeed in this task? =Rob

Re: Cassandra 1.2: old node does not want to re-join the ring

2013-08-26 Thread Robert Coli
On Mon, Aug 26, 2013 at 5:39 AM, Denis Kot wrote: > 2) Stop gossip > > 3) Stop thrift > 4) Drain > 5) Stop Cassandra 6) Move all data to ebs (we using ephemeral volumes for > data) > 7) Stop / Start instance > 8) Move data back > 9) Start Cassandra > 10) stop cassandra 11) set auto_bootstrap:fals

Cassandra 1.2: old node does not want to re-join the ring

2013-08-26 Thread Denis Kot
Hello, We have a Cassandra cluster of 6 nodes, 3 seeds. One day AWS sent us a message that one of our instances would be decommissioned, and this was seed01. To fix this we simply had to stop/start the instance to move it to a new AWS host. Before the stop/start we did: 2) Stop gossip 3) Stop thrift 4) Drain 5

Embedded Cassandra 1.2

2013-07-03 Thread Sávio Teles
We are using Cassandra 1.2 embedded in a production environment. We are having some issues with these lines: SocketAddress socket = remoteSocket.get(); assert socket != null; ThriftClientState cState = activeSocketSessions.get(socket); The connection is maintained by remoteSocket thread

Re: Hadoop/Cassandra 1.2 timeouts

2013-06-26 Thread aaron morton
aland @aaronmorton http://www.thelastpickle.com On 25/06/2013, at 4:10 AM, Brian Jeltema wrote: > I'm having problems with Hadoop job failures on a Cassandra 1.2 cluster due > to > >Caused by: TimedOutException() >2013-06-24 11:29:11,

Hadoop/Cassandra 1.2 timeouts

2013-06-24 Thread Brian Jeltema
I'm having problems with Hadoop job failures on a Cassandra 1.2 cluster due to Caused by: TimedOutException() 2013-06-24 11:29:11,953 INFO Driver -at org.apache.cassandra.thrift.Cassandra$get_range_slices_result.read(Cassandra.java:12932) This is running on a 6-node cluste

RE: TTL can't be specified at column level using CQL 3 in Cassandra 1.2.x

2013-06-19 Thread Amresh Kumar Singh
From: Sylvain Lebresne [sylv...@datastax.com] Sent: Wednesday, June 19, 2013 3:45 PM To: user@cassandra.apache.org Subject: Re: TTL can't be specified at column level using CQL 3 in Cassandra 1.2.x Hi, > But CQL3 doesn't provide a way for this. That's not true. But the syntax is probably a

Re: TTL can't be specified at column level using CQL 3 in Cassandra 1.2.x

2013-06-19 Thread Sylvain Lebresne
Hi, > But CQL3 doesn't provide a way for this. That's not true. But the syntax is probably a bit more verbose than what you were hoping for. Your example (where I assume user_name is your partition key) can be achieved with: BEGIN BATCH UPDATE users SET password = 'aa' WHERE user_name='x
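The batch Sylvain starts to spell out would continue along these lines. This is a sketch completing the truncated snippet, not a quote of the full reply; the second TTL value is invented for illustration:

```cql
BEGIN BATCH
  UPDATE users USING TTL 5 SET password = 'aa' WHERE user_name = 'xamry2';
  UPDATE users USING TTL 3600 SET gender = 'm', state = 'UP' WHERE user_name = 'xamry2';
APPLY BATCH;
```

One UPDATE per distinct TTL, all against the same row, reproduces the per-column TTLs that Thrift allows in a single insert.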

TTL can't be specified at column level using CQL 3 in Cassandra 1.2.x

2013-06-19 Thread Amresh Kumar Singh
Hi, Using Thrift, we are allowed to specify different TTL values for each column in a row. But CQL3 doesn't provide a way to do this. For instance, this is allowed: INSERT INTO users (user_name, password, gender, state) VALUES ('xamry2', 'aa', 'm', 'UP') USING TTL 5; But something like

Re: Cassandra 1.2 TTL histogram problem

2013-05-23 Thread Yuki Morishita
> Are you sure that it is a good idea to estimate remainingKeys like that? Since we don't want to automatically scan every row to check overlap and cause heavy IO, the method can only do a best-effort type of calculation. In your case, try running a user defined compaction on that sstable file. It

Re: Cassandra 1.2 TTL histogram problem

2013-05-22 Thread cem
Thanks for the answer. It means that if we use RandomPartitioner it will be very difficult to find an sstable without any overlap. Let me give you an example from my test. I have ~50 sstables in total and an sstable with droppable ratio 0.9. I use a GUID for the key and only insert (no update/delete) s

Re: Cassandra 1.2 TTL histogram problem

2013-05-22 Thread Yuki Morishita
> Can the method calculate non-overlapping keys as overlapping? Yes. And randomized keys don't matter here, since sstables are sorted by the "token" calculated from the key by your partitioner, and the method uses the sstable's min/max tokens to estimate overlap. On Tue, May 21, 2013 at 4:43 PM, cem wrote: > Than
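Yuki's min/max-token estimate can be modeled as a plain interval-overlap check. A deliberately simplified Python sketch of the idea — not the actual worthDroppingTombstones code:

```python
def ranges_overlap(a_min, a_max, b_min, b_max):
    """True if the token ranges [a_min, a_max] and [b_min, b_max] intersect."""
    return a_min <= b_max and b_min <= a_max

def tombstones_safely_droppable(candidate, others):
    """A candidate sstable's tombstones can be purged by a single-table
    compaction only if no other sstable's token range overlaps it.
    With randomized keys every sstable spans most of the ring, so this
    is almost always False -- cem's observation in the thread."""
    c_min, c_max = candidate
    return not any(ranges_overlap(c_min, c_max, o_min, o_max)
                   for o_min, o_max in others)

# GUID-like keys: every sstable's range covers nearly the whole token space.
print(tombstones_safely_droppable((0, 95), [(1, 99), (3, 97)]))  # → False
```

This also shows why the estimate over-counts overlap: two sstables whose ranges intersect may still share no actual keys, which is exactly cem's concern.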

Re: Cassandra 1.2 TTL histogram problem

2013-05-21 Thread cem
Thank you very much for the swift answer. I have one more question about the second part. Can the method calculate non-overlapping keys as overlapping? I mean, it uses max and min tokens and the column count. They can be very close to each other if random keys are used. In my use case I generate a GUID fo

Re: Cassandra 1.2 TTL histogram problem

2013-05-21 Thread Yuki Morishita
> Why does Cassandra single table compaction skip the keys that are in the > other sstables? Because we don't want to resurrect deleted columns. Say sstable A has a column with timestamp 1, and sstable B has the same column deleted at timestamp 2. Then if we purge that column only from

Cassandra 1.2 TTL histogram problem

2013-05-21 Thread cem
Hi all, I have a question about ticket https://issues.apache.org/jira/browse/CASSANDRA-3442 Why does Cassandra single table compaction skip the keys that are in the other sstables? Please correct me if I am wrong. I also don't understand why we have this line in the worthDroppingTombstones method: dou

Re: CDH4 + Cassandra 1.2 Integration Issue

2013-04-11 Thread aaron morton
cqlsh in cassandra 1.2 defaults to CQL 3. - Aaron Morton Freelance Cassandra Consultant New Zealand @aaronmorton http://www.thelastpickle.com On 10/04/2013, at 6:55 PM, Gurminder Gill wrote: > Ah ha. So, the client defaults to CQL 2. Any way of changing that? I ti

Re: CDH4 + Cassandra 1.2 Integration Issue

2013-04-09 Thread Gurminder Gill
Ah ha. So, the client defaults to CQL 2. Any way of changing that? I tried libthrift 0.9 as well but it doesn't work. Thanks. On Tue, Apr 9, 2013 at 11:29 PM, Shamim wrote: > Hello, > if you created your table with cql then you have to add COMPACT > STORAGE as follows: > CREATE TABLE user (

Re: CDH4 + Cassandra 1.2 Integration Issue

2013-04-09 Thread Shamim
Hello,   if you created your table with cql then you have to add COMPACT STORAGE as follows: CREATE TABLE user (   id int PRIMARY KEY,   age int,   fname text,   lname text ) WITH COMPACT STORAGE -- Best regards   Shamim A. 10.04.2013, 08:22, "Gurminder Gill" : > I was able to start a MR

CDH4 + Cassandra 1.2 Integration Issue

2013-04-09 Thread Gurminder Gill
I was able to start a MR job after patching Cassandra.Hadoop as per CASSANDRA-5201 . But then, ColumnFamilyRecordReader pukes within the MapTask. It is unable to read CF definition in the sample keyspace. The CF "user" does exist. *How can "cf

Re: Upgrade to Cassandra 1.2

2013-02-15 Thread Eric Evans
On Thu, Feb 14, 2013 at 5:48 PM, Daning Wang wrote: > Thanks! Suppose I can upgrade to 1.2.x with 1 token by commenting out > num_tokens; how can I change to multiple tokens? I could not find a doc clearly > stating this. If you decide to move to virtual nodes after upgrading to 1.2, you can

Re: Upgrade to Cassandra 1.2

2013-02-15 Thread Alain RODRIGUEZ
There was a webinar (Datastax C*llege) about vnodes. It will be available soon here, I guess: http://www.datastax.com/resources/webinars/collegecredit. You could have watched it live and asked your own questions. Here is a howto: http://www.datastax.com/dev/blog/upgrading-an-existing-cluster-to-vnodes

Re: Upgrade to Cassandra 1.2

2013-02-14 Thread Daning Wang
Thanks! Suppose I can upgrade to 1.2.x with 1 token by commenting out num_tokens; how can I change to multiple tokens? I could not find a doc clearly stating this. On Thu, Feb 14, 2013 at 10:54 AM, Alain RODRIGUEZ wrote: > From: > http://www.datastax.com/docs/1.2/configuration/node_configura

Re: Upgrade to Cassandra 1.2

2013-02-14 Thread Alain RODRIGUEZ
From: http://www.datastax.com/docs/1.2/configuration/node_configuration#num-tokens About num_tokens: "If left unspecified, Cassandra uses the default value of 1 token (for legacy compatibility) and uses the initial_token. If you already have a cluster with one token per node, and wish to migrate t
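The migration path the quoted docs describe boils down to a one-line cassandra.yaml change on a node that has already been upgraded to 1.2 — a sketch (256 is the value commonly cited for vnodes; adjust to taste):

```yaml
# cassandra.yaml, on an existing node after the binaries are on 1.2:
num_tokens: 256    # splits the node's current range into 256 contiguous parts
# initial_token:   # leave commented out once num_tokens is set
```

Ranges only become randomly distributed after a subsequent shuffle, as the threads below discuss.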

Re: Upgrade to Cassandra 1.2

2013-02-14 Thread Daning Wang
Thanks Aaron and Manu. Since we are using 1.1, there is no num_tokens parameter. When I upgrade to 1.2, should I set num_tokens=1 to start up, or can I set it to other numbers? Daning On Tue, Feb 12, 2013 at 3:45 PM, Manu Zhang wrote: > num_tokens is only used at bootstrap > > I think it's also

Re: Upgrade to Cassandra 1.2

2013-02-12 Thread Manu Zhang
> > num_tokens is only used at bootstrap I think it's also used in this case (already bootstrapped with num_tokens = 1 and now num_tokens > 1). Cassandra will split a node's current range into *num_tokens* parts and there should be no change to the amount of ring a node holds before shuffling. O

Re: Upgrade to Cassandra 1.2

2013-02-12 Thread aaron morton
Restore the settings for num_tokens and intial_token to what they were before you upgraded. They should not be changed just because you are upgrading to 1.2, they are used to enable virtual nodes. Which are not necessary to run 1.2. Cheers - Aaron Morton Freelance Cassandra D

Re: Upgrade to Cassandra 1.2

2013-02-12 Thread Daning Wang
No, I did not run shuffle since the upgrade was not successful. What do you mean by "reverting the changes to num_tokens and initial_token"? Set num_tokens=1? initial_token should be ignored since it is not bootstrapping, right? Thanks, Daning On Tue, Feb 12, 2013 at 10:52 AM, aaron morton wrote: > We

Re: Upgrade to Cassandra 1.2

2013-02-12 Thread aaron morton
Were you upgrading to 1.2 AND running the shuffle, or just upgrading to 1.2? If you have not run shuffle I would suggest reverting the changes to num_tokens and initial_token. This is a guess, because num_tokens is only used at bootstrap. Just get upgraded to 1.2 first, then do the shuffle when t

Re: Cassandra 1.2 Atomic Batches and Thrift API

2013-02-12 Thread Drew Kutcharian
ssandra client mailing list: > client-...@cassandra.apache.org. > > -- > Sylvain > > > On Tue, Feb 12, 2013 at 4:44 AM, Drew Kutcharian wrote: > Hey Guys, > > Is the new atomic batch feature in Cassandra 1.2 available via the thrift > API? If so, how can I use it? > > -- Drew > >

Re: Cassandra 1.2 Atomic Batches and Thrift API

2013-02-12 Thread Sylvain Lebresne
From: Sylvain Lebresne [mailto:sylv...@datastax.com] Sent: Tuesday, February 12, 2013 10:19 To: user@cassandra.apache.org Subject: Re: Cassandra 1.2 Atomic Batches and Thrift API Yes, it's called atomic_batch_mutate a

RE: Cassandra 1.2 Atomic Batches and Thrift API

2013-02-12 Thread DE VITO Dominique
.org Subject: Re: Cassandra 1.2 Atomic Batches and Thrift API Yes, it's called atomic_batch_mutate and is used like batch_mutate. If you don't use thrift directly (which would qualify as a very good idea), you'll need to refer to whatever client library you are using to see if 1) s

Re: Cassandra 1.2 Atomic Batches and Thrift API

2013-02-12 Thread Sylvain Lebresne
If you are not sure what the best way to contact the developers of your client library is, then you may try the Cassandra client mailing list: client-...@cassandra.apache.org. -- Sylvain On Tue, Feb 12, 2013 at 4:44 AM, Drew Kutcharian wrote: > Hey Guys, > > Is the new atomic batch

Cassandra 1.2 Atomic Batches and Thrift API

2013-02-11 Thread Drew Kutcharian
Hey Guys, Is the new atomic batch feature in Cassandra 1.2 available via the thrift API? If so, how can I use it? -- Drew

Re: Upgrade to Cassandra 1.2

2013-02-11 Thread Daning Wang
Thanks Aaron. I tried to migrate an existing cluster (ver 1.1.0) to 1.2.1 but failed. - I followed http://www.datastax.com/docs/1.2/install/upgrading and merged cassandra.yaml with the following parameters: num_tokens: 256 #initial_token: 0 The initial_token is commented out; the current token should be obta

Re: Upgrade to Cassandra 1.2

2013-02-04 Thread aaron morton
There is a command line utility in 1.2 to shuffle the tokens… http://www.datastax.com/dev/blog/upgrading-an-existing-cluster-to-vnodes $ ./cassandra-shuffle --help Missing sub-command argument. Usage: shuffle [options] Sub-commands: create Initialize a new shuffle operation ls

Re: Upgrade to Cassandra 1.2

2013-02-03 Thread Manu Zhang
On Sun 03 Feb 2013 05:45:56 AM CST, Daning Wang wrote: I'd like to upgrade from 1.1.6 to 1.2.1; one big feature in 1.2 is that it can have multiple tokens on one node, but there is only one token in 1.1.6. How can I upgrade to 1.2.1 and then break up the token to take advantage of this feature? I we

Re: Issues with CQLSH in Cassandra 1.2

2013-02-02 Thread Gabriel Ciuloaica
Right, at that point neither cassandra-cli nor cqlsh will see any endpoint. Only after you drop the keyspace and re-create it with cassandra-cli will it work properly. Thanks, Gabi On 2/3/13 2:15 AM, Manu Zhang wrote: On Tue 29 Jan 2013 03:55:52 AM CST, aaron morton wrote: I was able to repli

Re: Issues with CQLSH in Cassandra 1.2

2013-02-02 Thread Manu Zhang
On Tue 29 Jan 2013 03:55:52 AM CST, aaron morton wrote: I was able to replicate it… $ bin/nodetool -h 127.0.0.1 -p 7100 describering foo Schema Version:253da4a3-e277-35b5-8d04-dbeeb3c9508e TokenRange: TokenRange(start_token:3074457345618258602, end_token:-9223372036854775808, endpoints

Upgrade to Cassandra 1.2

2013-02-02 Thread Daning Wang
I'd like to upgrade from 1.1.6 to 1.2.1; one big feature in 1.2 is that it can have multiple tokens on one node, but there is only one token in 1.1.6. How can I upgrade to 1.2.1 and then break up the token to take advantage of this feature? I went through this doc but it does not say how to change the

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-02-01 Thread aaron morton
> Are there tickets/documents explaining how data is replicated with Virtual Nodes? This http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2 Check the changes.txt file; they link to tickets. Not many people use BOP so you may be exploring new-ish territory. Try asking someone on the IRC

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-31 Thread Manu Zhang
On Thu 31 Jan 2013 03:43:32 AM CST, Zhong Li wrote: Are there tickets/documents explaining how data is replicated with Virtual Nodes? If there are multiple tokens on one physical host, is there a chance that two or more tokens chosen by the replication strategy are located on the same host? If you move/remove/add a token manua

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-30 Thread Zhong Li
Are there tickets/documents explaining how data is replicated with Virtual Nodes? If there are multiple tokens on one physical host, is there a chance that two or more tokens chosen by the replication strategy are located on the same host? If you move/remove/add a token manually, does the Cassandra engine validate the case? T

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-30 Thread Zhong Li
> You add a physical node and that in turn adds num_token tokens to the ring. No, I am talking about Virtual Nodes with an order preserving partitioner, for an existing host with multiple tokens set as a list in cassandra.initial_token. After initial bootstrapping, the host will not be aware of changes of

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-30 Thread Manu Zhang
On Wed 30 Jan 2013 02:29:27 AM CST, Zhong Li wrote: One more question, can I add a virtual node manually without reboot and rebuild a host data? I checked nodetool command, there is no option to add a node. Thanks. Zhong On Jan 29, 2013, at 11:09 AM, Zhong Li wrote: I was misunderstood thi

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-29 Thread Zhong Li
One more question: can I add a virtual node manually without rebooting and rebuilding a host's data? I checked the nodetool commands; there is no option to add a node. Thanks. Zhong On Jan 29, 2013, at 11:09 AM, Zhong Li wrote: > I misunderstood this > http://www.datastax.com/dev/blog/virtual-node

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-29 Thread Zhong Li
I misunderstood this http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2, especially "If you want to get started with vnodes on a fresh cluster, however, that is fairly straightforward. Just don’t set the initial_token parameter in your conf/cassandra.yaml and instead enable th

Re: Understanding Virtual Nodes on Cassandra 1.2

2013-01-29 Thread aaron morton
> After I searched some documents on the Datastax website and some old tickets, it seems > that it works for the random partitioner only, and leaves the order preserving > partitioner out of luck. Links? > or allow adding Virtual Nodes manually? I've not looked into it, but there is a cassandra.initial_token sta

Understanding Virtual Nodes on Cassandra 1.2

2013-01-28 Thread Zhong Li
Hi All, Virtual Nodes is a great feature. After I searched some documents on the Datastax website and some old tickets, it seems that it works for the random partitioner only, and leaves the order preserving partitioner out of luck. I may misunderstand, please correct me. If it doesn't love order preserved par

Re: Issues with CQLSH in Cassandra 1.2

2013-01-28 Thread aaron morton
I was able to replicate it… $ bin/nodetool -h 127.0.0.1 -p 7100 describering foo Schema Version:253da4a3-e277-35b5-8d04-dbeeb3c9508e TokenRange: TokenRange(start_token:3074457345618258602, end_token:-9223372036854775808, endpoints:[], rpc_endpoints:[], endpoint_details:[]) Toke

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Gabriel Ciuloaica
Hi Aaron, I'm using PropertyFileSnitch, and my cassandra-topology.properties looks like this: # Cassandra Node IP=Data Center:Rack # default for unknown nodes default=DC1:RAC1 # all known nodes 10.11.1.108=DC1:RAC1 10.11.1.109=DC1:RAC2 10.11.1.200=DC1:RAC3

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread aaron morton
Can you provide details of the snitch configuration and the number of nodes you have? Cheers - Aaron Morton Freelance Cassandra Developer New Zealand @aaronmorton http://www.thelastpickle.com On 25/01/2013, at 9:39 AM, Gabriel Ciuloaica wrote: > Hi Tyler, > > No, it was jus

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Gabriel Ciuloaica
Hi Tyler, No, it was just a typo in the email, I changed names of DC in the email after copy/paste from output of the tools. It is quite easy to reproduce (assuming you have a correct configuration for NetworkTopologyStrategy, with vNodes(default, 256)): 1. launch cqlsh and create the keyspac

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Tyler Hobbs
Gabriel, It looks like you used "DC1" for the datacenter name in your replication strategy options, while the actual datacenter name was "DC-1" (based on the nodetool status output). Perhaps that was causing the problem? On Thu, Jan 24, 2013 at 1:57 PM, Gabriel Ciuloaica wrote: > I do not thi

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Gabriel Ciuloaica
I do not think that it has anything to do with Astyanax, but after I recreated the keyspace with cassandra-cli, everything is working fine. Also, I mentioned below that even "nodetool describering foo" did not show correct information for the tokens, endpoint_details, if the keys

Re: Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Ivan Velykorodnyy
Hi, Astyanax is not 1.2 compatible yet: https://github.com/Netflix/astyanax/issues/191 Eran planned to make it into 1.57.x On Thursday, January 24, 2013, Gabriel Ciuloaica wrote: > Hi, > > I have spent half of the day today trying to make a

Issues with CQLSH in Cassandra 1.2

2013-01-24 Thread Gabriel Ciuloaica
Hi, I have spent half of the day today trying to make a new Cassandra cluster work. I have set up a single data center cluster, using NetworkTopologyStrategy, DC1:3. I'm using the latest version of the Astyanax client to connect. After many hours of debugging, I found out that the problem may be in cqls

Re: Cassandra 1.2 system.peers table

2013-01-17 Thread Nicolai Gylling
On Jan 17, 2013, at 11:54 AM, Sylvain Lebresne wrote: > Now, one of the nodes dies, and when I bring it back up, it doesn't join the > cluster again, but becomes its own node/cluster. I can't get it to join the > cluster again, even after doing 'removenode' and clearing all data. > > That obvi

Re: Cassandra 1.2 system.peers table

2013-01-17 Thread Sylvain Lebresne
> > Now, one of the nodes dies, and when I bring it back up, it doesn't join > the cluster again, but becomes its own node/cluster. I can't get it to join > the cluster again, even after doing 'removenode' and clearing all data. > That obviously should not have happened. That being said, we have a f

Cassandra 1.2 system.peers table

2013-01-17 Thread Nicolai Gylling
Hi, I have a cluster of 3 nodes running Cassandra v1.2 with num_tokens set to 256. It's running on EC2. When I installed the cluster, I took up one node with seed set to its own IP. The next 2 had the first one as seed. A 'nodetool status' shows all 3 nodes up and running. Replication factor is
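A common cause of a restarted node forming its own cluster is a seed list that points only at the node itself. A hedged sketch of the relevant cassandra.yaml fragment (the IPs are placeholders, not from the original report):

```yaml
# cassandra.yaml (fragment) -- every node, including the original seed,
# should list the same stable seed IPs so a restarted node rejoins the ring.
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.1,10.0.0.2"
```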

Re: Cassandra 1.2 thrift migration

2013-01-16 Thread aaron morton
, at 1:24 AM, Vivek Mishra wrote: > Hi, > Is there any document to follow, in case i migrate cassandra thrift API to > 1.2 release? Is it backward compatible with previous releases? > While migrating Kundera to cassandra 1.2, it is complaining on various data > types. Giv

Re: Cassandra 1.2 thrift migration

2013-01-15 Thread Vivek Mishra
Hi, Is there any document to follow, in case I migrate the Cassandra Thrift API to the 1.2 release? Is it backward compatible with previous releases? While migrating Kundera to Cassandra 1.2, it is complaining about various data types, giving weird errors like: While connecting from cassandra-cli

Re: Cassandra 1.2, wide row and secondary index question

2013-01-15 Thread Sylvain Lebresne
On Mon, Jan 14, 2013 at 11:55 PM, aaron morton wrote: > Sylvain, > Out of interest if the select is… > > select * from test where interval = 7 and severity = 3 order by id desc > ; > > Would the ordering be a no-op or would it still run? > Yes, as Shahryar said this is currently rejected b

Re: Cassandra 1.2, wide row and secondary index question

2013-01-14 Thread Shahryar Sedghi
Aaron, if you have ORDER BY with a column with a secondary index in a WHERE clause, it fails with: Bad Request: ORDER BY with 2ndary indexes is not supported. Best Regards Shahryar On Mon, Jan 14, 2013 at 5:55 PM, aaron morton wrote: > Sylvain, > Out of interest if the select is… > > select

Re: Cassandra 1.2, wide row and secondary index question

2013-01-14 Thread aaron morton
Sylvain, Out of interest if the select is… select * from test where interval = 7 and severity = 3 order by id desc ; Would the ordering be a no-op or would it still run? Or more generally, does including an ORDER BY clause that matches the CLUSTERING ORDER BY DDL clause incur overhead

Re: Cassandra 1.2, wide row and secondary index question

2013-01-14 Thread Sylvain Lebresne
On Mon, Jan 14, 2013 at 5:04 PM, Shahryar Sedghi wrote: > Can I always count on this order, or it may change in the future? > I would personally rely on it. I don't see any reason why we would change that internally and besides I suspect you won't be the only one to rely on it so we won't take

Cassandra 1.2, wide row and secondary index question

2013-01-14 Thread Shahryar Sedghi
CQL 3 in Cassandra 1.2 does not allow ORDER BY when it is a wide row and a column with a secondary index is used in a WHERE clause, which makes sense. So the question is: I have a test table like this: CREATE TABLE test( interval int, id uuid, severity int, PRIMARY KEY
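Filling out the truncated schema as a hedged sketch (the clustering key choice and index name are assumptions, not from the original message), the rejected query from this thread looks like:

```cql
CREATE TABLE test (
  interval int,
  id uuid,
  severity int,
  PRIMARY KEY (interval, id)
) WITH CLUSTERING ORDER BY (id DESC);

CREATE INDEX test_severity_idx ON test (severity);

-- Rejected in 1.2: "Bad Request: ORDER BY with 2ndary indexes is not supported"
SELECT * FROM test WHERE interval = 7 AND severity = 3 ORDER BY id DESC;
```

Dropping the ORDER BY clause makes the query legal; rows within a partition then come back in the table's clustering order anyway.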

Re: Cassandra 1.2 Thrift and CQL 3 issue

2013-01-14 Thread Vivek Mishra
If you want the "row key", just query it (we prefer the term "partition key" in CQL3 and that's the term you'll find in documents like http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and it'll be part of the return columns. I understand that, as I am able to fetch "partition

Re: Cassandra 1.2 Thrift and CQL 3 issue

2013-01-14 Thread Sylvain Lebresne
> > How to fetch and populate "row key" from CqlRow api then? If you want the "row key", just query it (we prefer the term "partition key" in CQL3 and that's the term you'll find in documents like http://cassandra.apache.org/doc/cql3/CQL.html but it's the same thing) and it'll be part of the re
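A minimal CQL3 sketch of the point being made (the table and column names are invented for illustration): the partition key is just another column in the result set, so selecting it explicitly returns it alongside the other columns.

```cql
-- Hypothetical table: "user_id" is the partition key.
CREATE TABLE users (
  user_id uuid PRIMARY KEY,
  name text
);

-- The partition key comes back as an ordinary result column.
SELECT user_id, name FROM users;
```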
