We are running OpenJDK 7 with G1GC and have encountered no issues so far. We
took the tuning parameters from the Cassandra 3.0 branch.
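For reference, the G1-related entries in the Cassandra 3.0 branch's
jvm.options look roughly like the fragment below. This is a sketch from
memory, not a quote from the thread; check the exact flags and values against
the branch before using them:

```
## G1 settings (sketch; verify against the Cassandra 3.0 jvm.options)
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMillis=500
```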
Kind regards,
Nathan
On Mon, Sep 28, 2015 at 6:25 AM Kevin Burton wrote:
> Possibly for existing apps… we’re running G1 for everything except
> Elasticsearch and Ca
One more update: it looks like the driver is generating this CQL statement:
SELECT "test_id", "channel", "ts", "event", "groups" FROM "KEYSPACE"."test"
WHERE token("test_id") > ? AND token("test_id") <
Thanks for the suggestion, will take a look.
Our code looks like this:
val rdd = sc.cassandraTable[EventV0](keyspace, "test")
val transformed = rdd.map { e =>
  EventV1(e.testId, e.ts, e.channel, e.groups, e.event)
}
transformed.saveToCassandra(keyspace, "test_v1")
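For context, the mapping above assumes case classes along these lines. The
field names come from the snippet and the SELECT statement; the field types
are assumptions, since the real schema is not shown in the thread:

```scala
// Hypothetical case classes matching the columns used in the job above.
// Field names are from the snippet; the types are assumptions and may
// differ from the actual table schema.
case class EventV0(testId: String, channel: String, ts: Long,
                   event: String, groups: Set[String])

// v1 keeps the same fields in a different order; the map step in the job
// simply reshuffles them into the new shape.
case class EventV1(testId: String, ts: Long, channel: String,
                   groups: Set[String], event: String)
```

The connector maps case-class fields to columns by name, which is why a plain
field-by-field copy like this is enough for a schema-reordering migration.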
Not sure if this code might transl
We are using the Spark Cassandra driver, version 1.2.0 (Spark 1.2.1),
connecting to a 6-node bare-metal (16 GB RAM, Xeon E3-1270 (8 core), 4x 7.2k
RPM SATA disks) Cassandra cluster. Spark runs on a separate Mesos cluster.
We are running a transformation job, where we read the complete contents of
a table
I strongly disagree with recommending version 2.1.x. It only very recently
became more or less stable; anything before 2.1.5 was unusable. You might be
better off with a recent 2.0.x version.
Best regards,
Nathan
On Fri, Jun 26, 2015 at 3:36 PM Marcos Ortiz wrote:
> Regards, Susanne.
>
For analytics workloads, combining Spark and Cassandra will give you lots
of flexibility and performance. However, you will have to set up and learn
Spark. The Spark Cassandra connector is very performant and a joy to work
with.
N.
On Wed, Apr 22, 2015 at 4:09 PM Matthew Johnson
wrote:
> Our requ
We had some serious issues with 2.1.3:
- Bootstrapping a new node resulted in OOM
- Repair resulted in an OOM on several nodes
- Reading some parts of the data caused cascading crashes on all of its
  replica nodes.
Downgrading to the 2.0.x branch didn't work because of some
incompatibilities,
We are getting an OOM when adding a new node to an existing cluster. In the
heap dump we found that this thread caused the OutOfMemoryError:
"SharedPool-Worker-10" daemon prio=5 tid=440 RUNNABLE
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer