Re: what does this error mean

2015-05-28 Thread Jason Wee
Why did it happen? From the code, it looks like this condition is not null: https://github.com/apache/cassandra/blob/cassandra-2.1.3/src/java/org/apache/cassandra/io/sstable/SSTableReader.java#L921. Or you can quickly fix this by upgrading to 2.1.5; I noticed there is a code change for this class https:

Re: Start with single node, move to 3-node cluster

2015-05-28 Thread Ajaya Agrawal
If it is necessary to start the cluster now, then create 3 VMs on one machine and start up the cluster. The performance would not be as good as 3 individual nodes, but it will do the job for the time being. Later, when more nodes arrive, start decommissioning the VMs one by one and add the physical nodes (a rough sketch of that sequence follows). Dec
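
A rough sketch of that later migration step, not from the original reply: it assumes nodetool is on the PATH, remote JMX access to the VMs is enabled, and the host names below are placeholders.

    # Retire the temporary VMs one at a time once the physical nodes have joined.
    # Assumes nodetool on PATH and remote JMX access; host names are placeholders.
    import subprocess

    vm_hosts = ["vm-node-1", "vm-node-2", "vm-node-3"]  # temporary VMs to retire

    for host in vm_hosts:
        # Stream this node's data to the rest of the ring, then remove it.
        subprocess.check_call(["nodetool", "-h", host, "decommission"])
        # Check the ring before moving on to the next VM.
        subprocess.check_call(["nodetool", "status"])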

what does this error mean

2015-05-28 Thread 曹志富
I have a 25-node C* cluster running C* 2.1.3. These days one node has hit split brain many times. Checking the log I found this: INFO [MemtableFlushWriter:118] 2015-05-29 08:07:39,176 Memtable.java:378 - Completed flushing /home/ant/apache-cassandra-2.1.3/bin/../data/data/system/sstable_activity-5a1
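
Not from the original post, but a small sketch for comparing how each node reports cluster membership with the Python driver, which can help confirm a split-brain symptom; the addresses are placeholders.

    # Connect to each node as the sole contact point and list the peers it
    # reports, so the different views can be compared side by side.
    from cassandra.cluster import Cluster

    for addr in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]:
        cluster = Cluster([addr])
        cluster.connect()
        peers = sorted(h.address for h in cluster.metadata.all_hosts())
        print(addr, "reports members:", peers)
        cluster.shutdown()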

Re: Spark SQL JDBC Server + DSE

2015-05-28 Thread Brian O'Neill
Mohammed, This doesn't really answer your question, but I'm working on a new REST server that allows people to submit SQL queries over REST, which get executed via Spark SQL. Based on what I started here: http://brianoneill.blogspot.com/2015/05/spark-sql-against-cassandra-example.html I assume
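
Not from the original message: a minimal PySpark sketch of running Spark SQL against a Cassandra table via the spark-cassandra-connector (Spark 1.4+ DataFrame reader API), in the spirit of the blog post above; the keyspace and table names are placeholders.

    # Expose a Cassandra table as a DataFrame, register it, and query it with SQL.
    # Assumes the spark-cassandra-connector package is on the Spark classpath.
    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sc = SparkContext(appName="cassandra-sql-example")
    sqlContext = SQLContext(sc)

    df = (sqlContext.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="my_keyspace", table="my_table")
          .load())
    df.registerTempTable("my_table")

    sqlContext.sql("SELECT count(*) FROM my_table").show()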

RE: Spark SQL JDBC Server + DSE

2015-05-28 Thread Mohammed Guller
Anybody out there using DSE + Spark SQL JDBC server? Mohammed From: Mohammed Guller [mailto:moham...@glassbeam.com] Sent: Tuesday, May 26, 2015 6:17 PM To: user@cassandra.apache.org Subject: Spark SQL JDBC Server + DSE Hi - As I understand, the Spark SQL Thrift/JDBC server cannot be used with th

Re: 10000+ CF support from Cassandra

2015-05-28 Thread Jack Krupansky
How big is each of the tables - are they all fairly small or fairly large? Small as in no more than thousands of rows, or large as in tens of millions or hundreds of millions of rows? Small tables are not ideal for a Cassandra cluster since the rows would be spread out across the nodes, even th

Re: Cassandra 1.2.x EOL date

2015-05-28 Thread Robert Coli
On Wed, May 27, 2015 at 5:10 PM, Jason Unovitch wrote: > Simple and quick question, can anyone point me to where the Cassandra > 1.2.x series EOL date was announced? I see archived mailing list > threads for 1.2.19 mentioning it was going to be the last release and > I see CVE-2015-0225 mention

Re: Cassandra seems to replace existing node without specifying replace_address

2015-05-28 Thread Robert Coli
On Thu, May 28, 2015 at 2:00 AM, Thomas Whiteway <thomas.white...@metaswitch.com> wrote: > Sorry, I should have been clearer. In this case we’ve decommissioned > the node and deleted the data, commitlog, and saved caches directories so > we’re not hitting CASSANDRA-8801. We also hit the “A nod

Re: 10000+ CF support from Cassandra

2015-05-28 Thread Jonathan Haddad
While Graham's suggestion will let you collapse a bunch of tables into a single one, it'll likely result in so many other problems it won't be worth the effort. I strongly advise against this approach. First off, different workloads need different tuning. Compaction strategies, gc_grace_seconds,
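
Illustrative only, not from the original reply: the kind of per-table tuning being referred to, expressed with the Python driver; the table names and settings are made-up examples.

    # Two workloads get different compaction strategies and gc_grace_seconds,
    # which is impossible if everything is collapsed into one physical table.
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")

    # Write-heavy, append-mostly table: size-tiered compaction, short GC grace.
    session.execute("""
        ALTER TABLE sensor_events
        WITH compaction = {'class': 'SizeTieredCompactionStrategy'}
        AND gc_grace_seconds = 3600
    """)

    # Read-heavy table with frequent overwrites: leveled compaction, default GC grace.
    session.execute("""
        ALTER TABLE user_profiles
        WITH compaction = {'class': 'LeveledCompactionStrategy'}
        AND gc_grace_seconds = 864000
    """)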

Re: cassandra.WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' responses]

2015-05-28 Thread Jean Tremblay
I have experienced similar results: OperationTimedOut after inserting many millions of records on a 5-node cluster, using Cassandra 2.1.5. I rolled back to 2.1.4 using exactly the same configuration as with 2.1.5 and these timeouts went away… This is not the solution to your problem but just

cassandra.WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes' responses]

2015-05-28 Thread Sachin PK
Hi, I'm running Cassandra 2.1.5 (single datacenter, 4 nodes, 16GB VPS each node). I have given my configuration below. I'm using the Python driver on my clients. When I tried to insert 1049067 items I got an error: cassandra.WriteTimeout: code=1100 [Coordinator node timed out waiting for replica nodes'
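
Not from the original post: a hedged sketch of throttling a large bulk insert with the Python driver's execute_concurrent_with_args, which bounds the number of in-flight writes instead of firing everything at once; the keyspace, table, and column names are placeholders. The server-side limit behind this error is write_request_timeout_in_ms in cassandra.yaml.

    # Bounded-concurrency bulk insert with the DataStax Python driver.
    from cassandra.cluster import Cluster
    from cassandra.concurrent import execute_concurrent_with_args

    session = Cluster(["10.0.0.1"]).connect("my_keyspace")
    insert = session.prepare("INSERT INTO items (id, payload) VALUES (?, ?)")

    params = [(i, "payload-%d" % i) for i in range(1049067)]

    # concurrency caps in-flight requests so the coordinator is not flooded;
    # failures are collected per statement instead of aborting the whole run.
    results = execute_concurrent_with_args(
        session, insert, params, concurrency=50, raise_on_first_error=False)

    failed = [r for ok, r in results if not ok]
    print("failed writes:", len(failed))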

Re: 10000+ CF support from Cassandra

2015-05-28 Thread Graham Sanderson
Depending on your use case and data types (for example, if you can have a minimally nested JSON representation of the objects), you could go with a common map representation where keys are top-level object fields and values are valid JSON literals as strings; e.g. unquoted primitives, quoted str
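
Not from the original message: one possible reading of that layout, sketched with the Python driver; the table definition and field handling are assumptions.

    # One generic table holds many logical object types; map keys are top-level
    # field names and map values are JSON literals stored as strings.
    import json
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("my_keyspace")
    session.execute("""
        CREATE TABLE IF NOT EXISTS objects (
            type text,
            id text,
            fields map<text, text>,   -- field name -> JSON literal as a string
            PRIMARY KEY ((type), id)
        )
    """)

    obj = {"name": "alice", "age": 30, "tags": ["a", "b"]}
    fields = {k: json.dumps(v) for k, v in obj.items()}  # each value is a JSON literal

    session.execute(
        "INSERT INTO objects (type, id, fields) VALUES (%s, %s, %s)",
        ("user", "alice-1", fields))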

RE: Cassandra seems to replace existing node without specifying replace_address

2015-05-28 Thread Thomas Whiteway
Sorry, I should have been clearer. In this case we’ve decommissioned the node and deleted the data, commitlog, and saved caches directories so we’re not hitting CASSANDRA-8801. We also hit the “A node with address already exists, cancelling join” error when performing the same steps on 2.1.0,