Hi all, I was wondering how Hadoop and Cassandra should integrate with
each other. Currently I am using the spring-data framework to integrate
the two, and I wonder if there are other ways. Also, is it possible to
access Cassandra from a remote machine? Thanks a lot!
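For what it's worth, one common alternative to going through spring-data is to read the column family directly from a MapReduce job via Cassandra's ColumnFamilyInputFormat. A rough configuration sketch follows; the method names are from the ConfigHelper API of the 0.7/1.x era and may differ in your version, and the keyspace/CF/column names ("MyKeyspace", "MyCF", "value") are placeholders, not anything from this thread:

```java
import java.util.Arrays;

import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraJobSetup {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "cassandra-input-example");
        job.setInputFormatClass(ColumnFamilyInputFormat.class);

        // Where the job connects; adjust for your cluster.
        // Remote access works as long as the Thrift port is reachable.
        ConfigHelper.setInputInitialAddress(job.getConfiguration(), "localhost");
        ConfigHelper.setInputRpcPort(job.getConfiguration(), "9160");
        ConfigHelper.setInputPartitioner(job.getConfiguration(),
                "org.apache.cassandra.dht.RandomPartitioner");

        // Keyspace and column family to read ("MyKeyspace"/"MyCF" are placeholders)
        ConfigHelper.setInputColumnFamily(job.getConfiguration(), "MyKeyspace", "MyCF");

        // Which columns each mapper should see
        SlicePredicate predicate = new SlicePredicate()
                .setColumn_names(Arrays.asList(ByteBufferUtil.bytes("value")));
        ConfigHelper.setInputSlicePredicate(job.getConfiguration(), predicate);

        // job.setMapperClass(...), output config, etc., then:
        // job.waitForCompletion(true);
    }
}
```

This is only job configuration; it needs a running cluster and the Cassandra/Hadoop jars on the classpath, so treat it as a starting point rather than a drop-in.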
This line always returns "0" because the key ByteBuffer has already been
read from.
startToken = partitioner.getTokenFactory().toString(
        partitioner.getToken(Iterables.getLast(rows).key));
I was able to get it to work by using .mark() and .reset() on the buffer.
I'll log a bug, but I'm confused as to
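For context, mark()/reset() let you re-read a ByteBuffer after something downstream has consumed it, which is why that workaround fixes the "already been read from" problem. A minimal standalone illustration (not the actual Cassandra code):

```java
import java.nio.ByteBuffer;

public class BufferMarkReset {
    public static void main(String[] args) {
        ByteBuffer key = ByteBuffer.wrap("rowkey".getBytes());

        key.mark();                          // remember the current position
        byte[] consumed = new byte[key.remaining()];
        key.get(consumed);                   // reading advances the position
        System.out.println(key.remaining()); // 0 -- the buffer looks "empty" now

        key.reset();                         // jump back to the mark
        System.out.println(key.remaining()); // 6 -- the key is readable again
    }
}
```

Without the reset, any later consumer (like the token computation above) sees a zero-length key.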
Hey all,
I'm having an issue using ColumnFamilyInputFormat in a Hadoop job. The
mappers spin out of control and just keep reading the same records over and
over, never reaching the end of the input. I have a CF with wide rows
(although none is past about 5 columns at the moment), and I've tried setting wide rows
I'm not sure of the specific error, but you may not want to be using 0.21:

"23 August, 2010: release 0.21.0 available
This release contains many improvements, new features, bug fixes and
optimizations. It has not undergone testing at scale and should not be
considered stable or suitable for production."
When I try to read a CF from Hadoop, right after launching the job I get
this error:
Exception in thread "main" java.lang.IncompatibleClassChangeError: Found
interface org.apache.hadoop.mapreduce.JobContext, but class was expected
at
org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSpli