I see the same thing here. I have tried to do some math including
timestamps, column names, keys and raw data, but in the end Cassandra reports
a cluster size 2 to 3 times bigger than the raw data. I am surely
missing something in my formula, plus I have a lot of free hard drive space, so
it's not
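For what it's worth, here is the kind of rough estimate I mean, as a small PHP sketch (my own illustration; the per-column byte constants are assumptions about the on-disk column layout, and it ignores row indexes, bloom filters, commit logs, replication and not-yet-compacted SSTables, which probably account for a good part of the 2-3x factor):

<?php
// Very rough per-row size estimate; the byte constants are assumptions,
// not authoritative figures for the storage format.
function estimate_row_bytes($key, array $columns)
{
    $size = 2 + strlen($key);            // assumed key length prefix + key
    foreach ($columns as $name => $value) {
        $size += 2 + strlen($name);      // assumed name length prefix + name
        $size += 1 + 8;                  // assumed flags byte + i64 timestamp
        $size += 4 + strlen($value);     // assumed value length prefix + value
    }
    return $size;
}

echo estimate_row_bytes('user:42', array('email' => 'foo@example.com',
                                         'city'  => 'Paris')) . "\n";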
even after
> that, but a couple of days later, it's looking pretty uneven.
>
>
> On Sun, Jun 20, 2010 at 10:21 AM, Jordan Pittier - Rezel wrote:
>
>> Hi,
>> Have you tried nodetool repair (or cleanup) on your nodes?
>>
>>
>> On Sun, Jun 20, 2010
Hi,
Have you tried nodetool repair (or cleanup) on your nodes?
On Sun, Jun 20, 2010 at 4:16 PM, James Golick wrote:
> I just increased my cluster from 2 to 4 nodes, and RF=2 to RF=3, using RP.
>
> The tokens seem pretty even on the ring, but two of the nodes are far more
> heavily loaded than t
Hi,
Regarding point c), you should ask yourself: "what is good performance for
me?". Read performance mainly depends on how fast your hard drives are
and how many rows you can keep in cache. With such a small cluster, if
you want "good" read performance, you'd better have fast hard drives and
For sure, you have to pay particular attention to memory allocation on each
node; in particular, make sure your servers don't swap. Then you can monitor
how the load is balanced among your nodes (nodetool -h XX ring).
On Tue, May 11, 2010 at 11:46 PM, S Ahmed wrote:
> If you have 3-4 nodes, how do you mon
I'm facing the same issue with swap. It only occurs when I perform read
operations (writes are very fast :)). So I can't help you with the memory
problem.
But to balance the load evenly between the nodes in the cluster, just manually
fix their tokens (the "formula" is i * 2^127 / nb_nodes).
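Here is that formula as a quick PHP sketch (my addition, not part of the original mail; it uses the bcmath extension because 2^127 does not fit in a native PHP integer, and the node count is just an example):

<?php
// Evenly spaced initial tokens for RandomPartitioner: token_i = i * 2^127 / nb_nodes
$nb_nodes = 4;                      // example cluster size
$range = bcpow('2', '127');
for ($i = 0; $i < $nb_nodes; $i++) {
    $token = bcdiv(bcmul((string)$i, $range), (string)$nb_nodes);
    echo "node $i: $token\n";       // assign this as node $i's initial token
}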
Jordan
On Tue,
Don't forget to count the timestamps for each column.
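As a rough illustration (my own arithmetic, assuming the Thrift API's i64 timestamps): 10 million columns carry 10,000,000 x 8 bytes = roughly 80 MB of timestamp data alone, on top of the column names and values.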
2010/4/30 Bingbing Liu
> hi,
>
> thanks for your help.
>
> I run nodetool -h compact
>
> but the load stays the same, can anyone tell me why?
>
>
> 2010-04-30
> --
> Bingbing Liu
> --
Hi,
Here is a working example:

$mutation_map = array("$key" => array("Standard1" => array()));
for ($column_name = 0; $column_name < $options['numcolumns']; $column_name++)
{
    $column = new cassandra_Column(array('name' => "$column_name",
        'value' => 'put your data here', 'timestamp' => time()));
    $cosc = new cassandra_ColumnOrSuperColumn(array('column' => $column));
    $mutation_map["$key"]["Standard1"][] = new cassandra_Mutation(array('column_or_supercolumn' => $cosc));
}
// assumes $client (CassandraClient), $keyspace and $key are already set up
$client->batch_mutate($keyspace, $mutation_map, cassandra_ConsistencyLevel::ONE);
For those who can't wait:
http://perso.rezel.net/cassandra_0.6.0-1_all.deb
md5sum is 6dd71e18e1e0239e50302098d395536e
Based on https://svn.apache.org/repos/asf/cassandra/tags/cassandra-0.6.0/
On Tue, Apr 13, 2010 at 7:43 PM, Ned Wolpert wrote:
> Is 0.6.0 a repackage of 0.6.0rc1? If we're running
First, read carefully and understand:
http://wiki.apache.org/cassandra/ThriftExamples#PHP
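If it helps, here is a minimal connect / insert / read sketch in the spirit of that wiki page (my own sketch; the include paths and the 0.6-era generated class names are assumptions, so adjust them to wherever your Thrift bindings live):

<?php
// Assumed install path for the Thrift PHP runtime and generated cassandra_* classes.
$GLOBALS['THRIFT_ROOT'] = '/usr/share/php/Thrift';
require_once $GLOBALS['THRIFT_ROOT'].'/packages/cassandra/Cassandra.php';
require_once $GLOBALS['THRIFT_ROOT'].'/packages/cassandra/cassandra_types.php';
require_once $GLOBALS['THRIFT_ROOT'].'/transport/TSocket.php';
require_once $GLOBALS['THRIFT_ROOT'].'/transport/TBufferedTransport.php';
require_once $GLOBALS['THRIFT_ROOT'].'/protocol/TBinaryProtocol.php';

$socket    = new TSocket('127.0.0.1', 9160);
$transport = new TBufferedTransport($socket, 1024, 1024);
$client    = new CassandraClient(new TBinaryProtocol($transport));
$transport->open();

// Write one column, then read it back (Keyspace1/Standard1 from the default config).
$path = new cassandra_ColumnPath(array('column_family' => 'Standard1',
                                       'column'        => 'name'));
$client->insert('Keyspace1', 'row1', $path, 'John', time(),
                cassandra_ConsistencyLevel::ONE);
$result = $client->get('Keyspace1', 'row1', $path,
                       cassandra_ConsistencyLevel::ONE);
echo $result->column->value . "\n";
$transport->close();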
But you really shouldn't bother with benchmarks. Ask yourself this question:
"what if my Cassandra performs at 5k operations/s? And what about 3k
op/s?" In other words, "why are you benchmarking?" You've
Hi,
If you really want to benchmark your box, you should consider not using
Pandra or any other library built on top of Thrift. They all come with a (small)
overhead.
I also realized when I made my first benchmark that most of my box's
resources were used by the benchmarking tool itself and not by Cassandra
> Could you give me some clue, please?
"This is a known problem with 0.5 that was addressed in 0.6."
It seems you posted twice about the same issue.
On Wed, Apr 7, 2010 at 6:12 PM, Oleg Anastasjev wrote:
>
> Jonathan Ellis writes:
>
> >
> > Isn't this the same question I just answered?
>