I have a counter defined as a super column family:
  create column family TestCounter
    with column_type = Super
    and default_validation_class = CounterColumnType;
After I increment/decrement counter columns, cassandra-cli shows the super
column, column, and key names as hex values. How do I get them displayed as
readable names?
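One way to get readable output (just a sketch, assuming the row keys and
column names really are ASCII/UTF-8 text and that your cli version supports
the assume command) is to tell cassandra-cli how to decode them for display:

  assume TestCounter keys as utf8;
  assume TestCounter comparator as utf8;
  assume TestCounter sub_comparator as utf8;
  list TestCounter;

Alternatively, the comparator and subcomparator can be declared as UTF8Type
when the column family is created, so the cli never has to guess.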
On Thu, 2011-05-26 at 20:51 +0200, Kwasi Gyasi - Agyei wrote:
> CREATE COLUMNFAMILY magic (KEY text PRIMARY KEY, monkey ) WITH
> comparator = text AND default_validation = text
That's not a valid query. If monkey is a column definition, then it
needs a type. If you're trying to name the key, don
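For reference, a variant of that statement that does parse (only a sketch,
assuming the CQL shipped with 0.8 and that monkey is simply meant to be a
text column) gives monkey a type:

  CREATE COLUMNFAMILY magic (KEY text PRIMARY KEY, monkey text)
    WITH comparator = text AND default_validation = text;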
Hi Everyone,
Other than cron, is anyone using anything fancy to automate and manage
the execution of some fantastic tasks, like 'nodetool repair' on all
the nodes in their ring?
--
Sasha Dolgy
sasha.do...@gmail.com
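For what it's worth, plain cron can go a long way here; a minimal sketch
(the keyspace name MyKeyspace and the schedule are just placeholders),
staggered so the nodes don't all repair at the same time:

  # node 1 crontab: repair every Sunday at 01:00
  0 1 * * 0  nodetool -h localhost repair MyKeyspace
  # node 2 crontab: same job, offset by a few hours
  0 5 * * 0  nodetool -h localhost repair MyKeyspace

Anything fancier (say, a central scheduler that walks the ring and runs
the repairs sequentially) tends to be built around the same nodetool call.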
How about implementing a freezing mechanism for counter columns?
If there are no more increments within "freeze" seconds after the last
increment (it would be on the order of a day or so), the column would lock
itself and stop accepting increments.
And after this freeze period, the TTL should
Hello,
Actually I did not have the patience to investigate what was going on any
further; I had to drop the CF and start from scratch.
Even though there were no writes to those particular columns, while reading
at CL.ONE
there was a 50% chance that
- The query returned the correct value (51664)
- The quer
On Sat, May 28, 2011 at 5:43 AM, Jonathan Colby wrote:
> It might just not have occurred to me in the previous 0.7.4 version,
> but when I do a repair on a node in v0.7.6, it seems like data is also
> synced with neighboring nodes.
This has always been repair's behavior, yes.
--
Jonathan Ellis
It might just not have occurred to me in the previous 0.7.4 version,
but when I do a repair on a node in v0.7.6, it seems like data is also
synced with neighboring nodes.
My understanding of repair is that the data is reconciled on the node
being repaired, i.e., data is removed or added to that n
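If the extra streaming to neighbors is a concern, the repair can at least be
scoped down; a sketch (the host, keyspace, and column family names are
placeholders, and the column family argument assumes a nodetool version that
accepts one):

  nodetool -h 10.0.0.1 repair MyKeyspace MyColumnFamily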
OK, it seems a "phantom" node (one that was removed from the cluster)
kept being passed around in gossip as a down endpoint and was messing
up the gossip algorithm. I had the luxury of being able to stop the
entire cluster and bring the nodes up one by one. That purged the bad
node from gossip.