Looks like you are using vnodes; use nodetool status instead.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 10/05/2013, at 11:58 PM, Nikolay Mihaylov wrote:
> do you use vnodes ?
>
>
> On Fri, May 10, 2013 at 1
> Let's say we're seeing some bug in C*, and SSTables don't get deleted during
> compaction (which I guess is the only reason for this consumption of
> disk space).
Just out of interest, can you check the number of SSTables reported by nodetool
cfstats for a CF against the number of *-Data.db files?
> After several cycles, pycassa starts getting connection failures.
Do you have the error stack?
Are they TimedOutExceptions, socket timeouts, or something else?
> Would things be any different if we used multiple nodes and scaled the data
> and worker count to match? I mean, is there somethin
Maybe I should ask the question a different way.
Currently, if all index samples do not fit in the Java heap, the JVM will
eventually OOM and the process will crash. The proposed change sounds like
it will move the index samples to off-heap storage, but if that can't
hold all samples, the proc
So will Cassandra provide a way to limit its off-heap usage to avoid
unexpected OOM kills? I'd much rather have performance degrade when 100%
of the index samples no longer fit in memory than have the process
killed, with no way to stabilize it without adding hardware or removing data.
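For reference, the number of index samples held in memory is itself tunable. A hedged cassandra.yaml fragment (1.2-era setting name; the value shown is the usual default):

```yaml
# cassandra.yaml: keep one sample per N entries of each partition index.
# Raising this lowers memory use for samples at the cost of slower lookups.
index_interval: 128
```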
-Br
Hi
I haven't visited this forum in a couple of months, and I want to upgrade our
current production Cassandra cluster (4 nodes, 1.0.11) to the latest 1.2.x
version. Is this a straightforward upgrade, or is it different?
Thanks & Regards
/Roshan
Being token aware makes a big performance difference. We do that internally
with our client, and it means a lot for 95th-percentile time. If Astyanax is
not vnode token aware and you're using them, you could see worse performance.
A long-time "beef" with the client libraries is that they are always
chasi
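Token-aware routing can be sketched in a few lines; everything below (the ring tokens, node names, and MD5-based token function) is an illustrative assumption, not Astyanax's actual API:

```python
import bisect
import hashlib

# A made-up three-node ring: (token, node) pairs sorted by token.
ring = sorted([(0, "node1"), (2**127 // 3, "node2"), (2 * (2**127 // 3), "node3")])
tokens = [t for t, _ in ring]

def token(key: bytes) -> int:
    # RandomPartitioner-style token: MD5 of the key as a non-negative integer.
    return int(hashlib.md5(key).hexdigest(), 16) % 2**127

def coordinator(key: bytes) -> str:
    # Route to the first node whose token is >= the key's token,
    # wrapping around the ring.
    i = bisect.bisect_left(tokens, token(key)) % len(ring)
    return ring[i][1]
```

A client that routes this way skips the extra coordinator hop on every request, which is exactly the kind of saving that shows up at the 95th percentile.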
Whenever I mention the limit in a talk I say, "2 billion columns" in a faux
10-year-old voice :). Cassandra can have a 2-billion-column row. A 60MB row
in the row cache will make the JVM sh*t the bed. (You should not use the row
cache anyway.) As rcoli points out, with a 35 GB row, I doubt you can do anything wit
On Sun, May 12, 2013 at 6:26 PM, Edward Capriolo wrote:
> 2 billion is the maximum theoretical limit of columns under a row. It is
> NOT the maximum limit of a CQL collection. The design of CQL collections
> currently requires retrieving the entire collection on read.
Each column has a byte overhead
This is also my first post here :).
While CQL3 is recommended for new projects, Thrift isn't going anywhere. You
don't necessarily need to use the binary protocol for CQL3 either. You can
execute CQL3 queries through Thrift. As far as I know, the new binary protocol
is still beta in 1.2.
Faraaz
Ah, okay: iostat -x needs an interval, e.g. "iostat -x 5" works better (the
first report shows averages since boot, so it always shows 4% util while the
second shows 100%). iotop seems a bit better here.
So we know that since we added our new node, we are slammed with reads and
no one is running compactions according to "clush -g datanodes nod
Using GET or LIST from the CLI will do what you want.
it's a bad idea to have One Big Partition, since partitions by nature
are not spread across multiple machines. in general you'll want to
keep partitions under ~ 1M cells or ~100K CQL3 rows.
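The guideline above is easy to sanity-check with arithmetic; the row and cell counts below are made-up examples, not numbers from this thread:

```python
# ~1M cells per partition is the suggested ceiling; each CQL3 row
# contributes roughly one cell per non-key column.
cql3_rows = 100_000
cells_per_row = 8            # non-key columns per CQL3 row (example)

cells = cql3_rows * cells_per_row
print(cells)                 # 800000, under the ~1M-cell guideline
```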
On Sun, May 12, 2013 at 12:53 AM, Sam Mandes wrote:
We're running a pretty consistent load on our cluster and added a new node to a
6-node cluster on Friday (QA worked great, but production not so much). One
mistake was starting up the new node and then disabling the firewall :(,
which allowed nodes to discover it BEFORE the node bootstrapped
On Sat, May 11, 2013 at 6:56 AM, Rodrigo Felix
wrote:
> What does the first line of bin/nodetool ring output mean? It has the same
> token as the down node.
No, it doesn't. It is displaying the token of the highest node to
indicate to you that the token range is a ring, and that the first
node i
Collections that big are likely not what you want. Many people are using
Cassandra because they want low-latency reads (<10ms) on smallish row keys or
key slices. Attempting to get 10K+ columns in one go generally does not
work well. First, there are network issues: 100K columns of 5 bytes requires
la
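A back-of-envelope sketch of why such slices are heavy; the ~15-byte per-column overhead figure is an assumption for illustration, not an exact Cassandra constant:

```python
# Reading 100K columns of 5 bytes each is far more than 500 KB once
# per-column overhead (name, timestamp, flags) is counted.
columns = 100_000
value_bytes = 5
overhead_bytes = 15          # assumed per-column overhead, for illustration

payload = columns * (value_bytes + overhead_bytes)
print(payload)               # 2000000 bytes: ~2 MB carrying 500 KB of values
```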
Hi, I am new to Cassandra. I am using "Aquiles.Cassandra10", but I don't know
why LongType displays in the CLI like below.
Can anyone tell me how to display a real long in cassandra-cli? I know there
are differences between C# bytes and Java bytes, but Thrift has already
converted little endian to big endian
[defaul
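The endianness point can be demonstrated in a few lines of Python (12345 is just an example value):

```python
import struct

# Cassandra's LongType is an 8-byte big-endian ('>q') value. A client that
# packs longs little-endian ('<q') will make cassandra-cli display garbage.
raw = struct.pack('>q', 12345)        # what Cassandra expects on the wire
value = struct.unpack('>q', raw)[0]   # round-trips correctly

wrong = struct.unpack('>q', struct.pack('<q', 12345))[0]  # the endian bug
print(value, wrong)                   # wrong comes out as a huge, wrong number
```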