Looking at the debug log, I see
[2015-06-29 23:38:11] [main] DEBUG CqlRecordReader - cqlQuery SELECT
"wpid","value" FROM "qarth_catalog_dev"."product_v1" WHERE token("wpid")>?
AND token("wpid")<=? LIMIT 10
[2015-06-29 23:38:11] [main] DEBUG CqlRecordReader - created
org.apache.cassandra.hadoop.cql
Apologies, I meant C* version 2.0.16
The latest 2.1.7 source has a different WordCount example, which does not
use CqlPagingInputFormat. I am comparing the differences to understand why
the change was made, but if you can shed some light on the reasoning, it
would be much appreciated (and will save
I was going through the WordCount example in the latest 2.1.7 Apache C*
source and there is a reference to
org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat, but it is not in
the source tree or in the compiled binary. Looks like we really cannot use
C* with Hadoop without a paging input format.
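For what it's worth, the 2.1 WordCount example appears to have moved to
CqlInputFormat, which pages through the native-protocol Java driver rather
than the old Thrift-based paging. Below is a rough sketch of that style of
job setup; the keyspace, table, and values are placeholders of mine, not
taken from the shipped example, so treat the exact helper calls as
approximate:

    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;
    import org.apache.cassandra.hadoop.cql3.CqlInputFormat;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CqlJobSetup
    {
        public static Job configure() throws Exception
        {
            Job job = Job.getInstance(new Configuration(), "wordcount");
            // CqlInputFormat stands in for the removed CqlPagingInputFormat;
            // paging is handled by the underlying CQL driver per split.
            job.setInputFormatClass(CqlInputFormat.class);

            Configuration conf = job.getConfiguration();
            ConfigHelper.setInputInitialAddress(conf, "127.0.0.1");
            ConfigHelper.setInputPartitioner(conf, "Murmur3Partitioner");
            // Placeholder keyspace/table names.
            ConfigHelper.setInputColumnFamily(conf, "wordcount_ks", "input_words");
            // Rows fetched per page by the driver; value is illustrative.
            CqlConfigHelper.setInputCQLPageRowSize(conf, "1000");
            return job;
        }
    }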
On Mon, Jun 29, 2015 at 2:43 PM, David Aronchick
wrote:
> Ping--- any thoughts here?
>
I don't have any thoughts on your specific issue at this time, but FWIW
#cassandra on freenode is sometimes a better forum for interactive
debugging of operational edge cases.
=Rob
Ping--- any thoughts here?
--
I posted this to StackOverflow with no response:
http://stackoverflow.com/questions/30744486/how-to-handle-failures-in-cassandra-when-node-goes-away
Basically, I'm trying to run Cassandra in a Kubernetes cluster, and trying
out what ha
All,
I converted one of my C* programs to Hadoop 2.x and the DataStax C* drivers for
2.1.0. The original program (Hadoop 1.x) worked fine when we set
InputCQLPageRowSize and InputSplitSize to reasonable values. For example, if we
had 60K rows, a row size of 100 and a split size of 1 will
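For context, both knobs are set on the Hadoop Configuration; a minimal sketch
with purely illustrative values (not the ones from the failing run):

    import org.apache.cassandra.hadoop.ConfigHelper;
    import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;
    import org.apache.hadoop.conf.Configuration;

    public class InputTuning
    {
        public static void apply(Configuration conf)
        {
            // Approximate number of rows per input split handed to a mapper.
            ConfigHelper.setInputSplitSize(conf, 1000);
            // Rows fetched per CQL page while a split is being read.
            CqlConfigHelper.setInputCQLPageRowSize(conf, "100");
        }
    }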
So OpsCenter doesn't have to be the same size, but it's better if it's at least
a c3 or m3 instance? Also, is it possible to set up a cluster with the OpsCenter
node as a different instance size using the DataStax AMI?
Thanks,
Sid
On Fri, Jun 26, 2015 at 2:59 PM, Robert Coli wrote:
> On Fri, Jun 26, 2015
On Sun, Jun 28, 2015 at 10:46 AM, Anuj Wadehra
wrote:
> Thanks Jake!! But I think most people have 2.0.x in production right now,
> as 2.1.6 was only recently declared production ready. I think the bug is too
> important to be left open in 2.0.x, as it leads to data loss. Should I open a
> JIRA?
>
So
The cluster info is:
Cluster Information:
Name: Status Cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
Schema versions:
85f8632f-5c43-3343-a73e-cef935a186ab: [172.19.105.58, 172.19.105.56,
172.19.105.57, 172.19.105.54, 172
Error when a node joins the cluster:
WARN 13:30:18 UnknownColumnFamilyException reading from socket; closing
org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find
cfId=91748db0-9af4-11e4-a861-0bbf95bc6f42
at
org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySeriali
Hi,
We're using Cassandra 2.0.8.39 through DataStax Enterprise 4.5.1 with a 9-node
cluster.
We need to add a few new nodes to the cluster, but we're experiencing an issue
we don't know how to solve.
Here is exactly what we did:
- We had 8 nodes and needed to add a few more
- W
If you're looking to measure actual disk space, I'd use the du command,
assuming you're on Linux: http://linuxconfig.org/du-1-manual-page
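If you want to script the same thing per table, here is a rough Java sketch
that walks the default data directory layout and skips the snapshots
subdirectories; the path and keyspace name below are just placeholders:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class TableDiskUsage
    {
        // Sum on-disk bytes under one table directory, ignoring snapshots.
        static long liveBytes(Path tableDir) throws IOException
        {
            try (Stream<Path> files = Files.walk(tableDir))
            {
                return files.filter(Files::isRegularFile)
                            .filter(p -> !p.toString().contains("/snapshots/"))
                            .mapToLong(p -> p.toFile().length())
                            .sum();
            }
        }

        public static void main(String[] args) throws IOException
        {
            // Default layout: <data_dir>/<keyspace>/<table>/; placeholder keyspace.
            Path keyspaceDir = Paths.get("/var/lib/cassandra/data", "my_keyspace");
            try (Stream<Path> tables = Files.list(keyspaceDir))
            {
                for (Path t : (Iterable<Path>) tables::iterator)
                    System.out.println(t.getFileName() + ": " + liveBytes(t) + " bytes");
            }
        }
    }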
On Mon, Jun 29, 2015 at 2:26 AM shahab wrote:
> Hi,
>
> Probably this question has been already asked in the mailing list, but I
> couldn't find it.
>
> The
Hi,
This question has probably already been asked on the mailing list, but I
couldn't find it.
The question is: how do I measure the disk space used by a keyspace, per
column family, excluding snapshots?
best,
/Shahab
One more update: it looks like the driver is generating this CQL statement:
SELECT
"test_id", "channel", "ts", "event", "groups" FROM "KEYSPACE"."test" WHERE
token("test_id") > ? AND token("test_id") <= ? ALLOW FILTERING;
Best regards,
Nathan
On Fri, Jun 26, 2015 at 8:16 PM Nathan Bijnens