Hi,
(This is the 2nd time I'm sending this message. I sent it the first
time a few days ago but it does not appear in the archives.)
I have a follow-up question about a thread from February 2011. In
short, I am wondering why one does not have to query all Cassandra
nodes when doing a secondary index lookup.
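For reference, by "secondary index lookup" I mean a query along these lines
(a minimal Hector sketch; the "Users" column family, the indexed "state"
column, and the value "UT" are made-up names for illustration):

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.IndexedSlicesQuery;

    public class IndexLookupExample {
        // Query a column family by an equality predicate on a secondary-indexed column.
        public static OrderedRows<String, String, String> findByState(Keyspace keyspace) {
            IndexedSlicesQuery<String, String, String> query =
                    HFactory.createIndexedSlicesQuery(keyspace,
                            StringSerializer.get(), StringSerializer.get(), StringSerializer.get());
            query.setColumnFamily("Users");            // column family carrying the index
            query.addEqualsExpression("state", "UT");  // predicate on the indexed column
            query.setRange("", "", false, 100);        // up to 100 columns per matching row
            return query.execute().get();
        }
    }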
Thanks William - so you were able to get everything running correctly, right?
FWIW, we're in the process of upgrading to 0.8.4 and found that all we needed
was that first link you mentioned - the VersionedValue modification. It's
running fine on our staging cluster and we're in the process of m
Hi,
I am new to Cassandra.
I have some questions about the cluster partitioner.
It looks like once I set the partitioner, there is no way to change it
later.
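To be concrete, by "set the partitioner" I mean the single line in
cassandra.yaml, which as far as I can tell applies to the whole cluster rather
than to individual keyspaces:

    # cassandra.yaml
    partitioner: org.apache.cassandra.dht.RandomPartitioner
    # or, for ranges ordered by key:
    # partitioner: org.apache.cassandra.dht.ByteOrderedPartitioner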
However, in our application some keyspaces need to be sorted by key and
some don't.
Is there a flexible way to define a cluster to con
Thank you guys.
I installed JNA using yum, put jna.jar on the classpath, and
everything seems fine.
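For anyone hitting the same issue, a quick way to confirm that jna.jar is
actually visible on the classpath might be something like the check below
(com.sun.jna.Native is JNA's entry-point class); Cassandra also logs at
startup whether JNA was detected, which is probably the more direct check.

    public class JnaCheck {
        public static void main(String[] args) {
            try {
                // JNA's main class; resolvable only if jna.jar is on the classpath.
                Class.forName("com.sun.jna.Native");
                System.out.println("JNA found on the classpath");
            } catch (ClassNotFoundException e) {
                System.out.println("jna.jar is not on the classpath");
            }
        }
    }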
On Thu, Sep 1, 2011 at 9:51 AM, Eric Evans wrote:
> On Thu, Sep 1, 2011 at 10:13 AM, Eric Czech wrote:
> > I got it here : https://nodeload.github.com/twall/jna/tarball/master
> > Is the
I've had some troubles, so I thought I'd pass on my various bug fixes:
- Cass 0.8.4 has trouble with Pig/Hadoop (you get NPEs in the Pig logs when
trying to connect to Cassandra). You need this patch:
http://svn.apache.org/viewvc?revision=1158940&view=revision
And maybe this:
http://svn.apache.
Kevin,
You will find that many of us using Cassandra are already doing what you suggest
(custom serializer/deserializer).
We call it JSON.
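That is, something along these lines (Jackson 1.x shown purely as an example;
the map contents are made up):

    import java.util.HashMap;
    import java.util.Map;
    import org.codehaus.jackson.map.ObjectMapper;

    public class JsonColumnExample {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            // "Custom serialization": render the object as a JSON string
            // and store that string as the column value.
            Map<String, Object> profile = new HashMap<String, Object>();
            profile.put("name", "kevin");
            profile.put("logins", 42);
            String columnValue = mapper.writeValueAsString(profile);

            // "Custom deserialization": parse the column value back on read.
            Map<?, ?> parsed = mapper.readValue(columnValue, Map.class);
            System.out.println(parsed.get("name"));
        }
    }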
--
Colin
*Sent from Star Trek-like flat panel device, which, although larger than my Star
Trek-like communicator device, may have typos and exhibit imprope
Hi
I've already set Long rpc_timeout_in_ms = Long.MAX_VALUE;
How can I avoid timeout exceptions when debugging the Cassandra server?
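I am also not sure whether the client-side socket timeout matters here; with
Hector, for example, I believe it would be raised roughly like this (host and
cluster name are placeholders):

    import me.prettyprint.cassandra.service.CassandraHostConfigurator;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.factory.HFactory;

    public class DebugTimeoutExample {
        public static void main(String[] args) {
            // Raise the Thrift socket timeout (milliseconds) so the client connection
            // survives long pauses at server-side breakpoints.
            CassandraHostConfigurator config = new CassandraHostConfigurator("localhost:9160");
            config.setCassandraThriftSocketTimeout(Integer.MAX_VALUE);
            Cluster cluster = HFactory.getOrCreateCluster("DebugCluster", config);
            System.out.println(cluster.describeClusterName());
        }
    }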
Best Regards!
Yi Wang(Jenny)
Hi:
I created a new super column family and inserted 1000 super columns into it in a
short time. Cassandra 0.8 was running on my laptop. While inserting, the system
kept flushing to files and compacting. I used the default configuration. I just
can't figure out why the size of the data file i
It seems that you did not set
AbstractComposite.ComponentEquality.GREATER_THAN_EQUAL
on the endComp. You could try something like:
endComp = new Composite();
endComp.addComponent(timeUUID);
endComp.addComponent("ACTIVE", StringSerializer.get(), "UTF8Type",
        AbstractComposite.ComponentEquality.GREATER_THAN_EQUAL);
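For completeness, a slice query using such a start/end pair might look roughly
like this with Hector (the "Events" column family and the row key are made-up
names):

    import java.util.UUID;
    import me.prettyprint.cassandra.serializers.CompositeSerializer;
    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.AbstractComposite;
    import me.prettyprint.hector.api.beans.ColumnSlice;
    import me.prettyprint.hector.api.beans.Composite;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.SliceQuery;

    public class CompositeSliceExample {
        public static ColumnSlice<Composite, String> querySlice(Keyspace keyspace, UUID timeUUID) {
            // Start of the range: components compared with plain equality.
            Composite startComp = new Composite();
            startComp.addComponent(timeUUID);
            startComp.addComponent("ACTIVE", StringSerializer.get(), "UTF8Type",
                    AbstractComposite.ComponentEquality.EQUAL);

            // End of the range: GREATER_THAN_EQUAL on the last component makes the
            // end bound cover all columns whose component equals "ACTIVE".
            Composite endComp = new Composite();
            endComp.addComponent(timeUUID);
            endComp.addComponent("ACTIVE", StringSerializer.get(), "UTF8Type",
                    AbstractComposite.ComponentEquality.GREATER_THAN_EQUAL);

            SliceQuery<String, Composite, String> sliceQuery =
                    HFactory.createSliceQuery(keyspace, StringSerializer.get(),
                            new CompositeSerializer(), StringSerializer.get());
            sliceQuery.setColumnFamily("Events");  // made-up column family
            sliceQuery.setKey("someRowKey");       // made-up row key
            sliceQuery.setRange(startComp, endComp, false, 100);
            return sliceQuery.execute().get();
        }
    }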
Then what will be the sweet spot for Cassandra? I am more interested in
Cassandra because my application is write-heavy.
So far, what I have understood is that Cassandra also will not work best on
SANs?
P.S.
MongoDB is also a NoSQL database and is designed for horizontal scaling, so how
is it good
Hi there,
I had a 3-node ring, added a 4th one, and moved the others to appropriate
tokens... doing nodetool ring shows:
127.0.0.1  datacenter1  rack1  Up  Normal  348.82 MB  25.00%  0
127.0.0.2  datacenter1  rack1  Up  Normal  349.81 MB  25.00%  42535295865117307932921825928971
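For reference, the target tokens came from splitting the RandomPartitioner
range (0 to 2^127) evenly across the nodes, along these lines:

    import java.math.BigInteger;

    public class TokenCalc {
        public static void main(String[] args) {
            int nodeCount = 4;
            // RandomPartitioner tokens live in [0, 2^127); evenly spaced tokens
            // are token(i) = i * 2^127 / nodeCount.
            BigInteger ringSize = BigInteger.valueOf(2).pow(127);
            for (int i = 0; i < nodeCount; i++) {
                BigInteger token = ringSize.multiply(BigInteger.valueOf(i))
                        .divide(BigInteger.valueOf(nodeCount));
                System.out.println("node " + (i + 1) + ": " + token);
            }
        }
    }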