It has given me weird results in terms of performance. You can try what
Netflix is using; they seem to be pretty happy with this version of
Cassandra:
http://techblog.netflix.com/2012/07/benchmarking-high-performance-io-with.html
On Fri, Jul 20, 2012 at 2:34 AM, aaron morton wrote:
> Can a r
One pointer would be to look at a memory snapshot and, if you find something
fishy, try tuning the GC around that. It could be that Cassandra tries to load
everything into memory and then has to do garbage collection, which adds pause
time.
Turn on GC logging by changing the parameters in conf/cassa
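For reference, the usual HotSpot GC logging options look something like this
(the log file path is just an example, adjust it to your setup):

  -verbose:gc
  -XX:+PrintGCDetails
  -XX:+PrintGCDateStamps
  -XX:+PrintGCTimeStamps
  -Xloggc:/var/log/cassandra/gc.log

Once the pauses show up in the log you can correlate them with the latency
spikes you are seeing.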
Go and look in the data directory on disk and check whether the files still
exist there.
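For example, with the default data_file_directories setting it would be
something like (keyspace name is just a placeholder):

  ls /var/lib/cassandra/data/<YourKeyspace>/

and look for that column family's *-Data.db / *-Index.db / *-Filter.db files.
Depending on the Cassandra version they sit either directly in the keyspace
directory or in a per column family subdirectory, and they only appear once
something has been flushed to disk.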
On Thu, Jul 19, 2012 at 2:56 PM, Kirk wrote:
> What does "show schema" show? Is the CF showing up?
>
> Are the data files for the CF on disk?
>
> If you poke around with the system CFs, is there any data sti
> What CL are you using for writes? Latency increases
> for strong CL.
>
> If you want to increase throughput, try increasing the number of clients.
> Of course, it doesn't mean that throughput will always increase. My
> observation was that it will increase and after a certain number of clients
> t to back up
> in nodetool tpstats. If you see it report dropped messages it is
> overloaded.
>
> Hope that helps.
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 18/07/2012, at 4:48 AM, Code Box wrote:
>> ty" they mean "throughput scales with the
>> number of machines in your cluster".
>>
>> Try adding more machines to your cluster and measure the throughput. I'm
>> pretty sure you'll see linear scalability.
>>
>> regards,
>> Christian
>
I am doing Cassandra benchmarking using YCSB to evaluate the best performance
for my application, which will be both read- and write-intensive. I have set
up a three-node cluster environment on EC2 and I am using YCSB as a client in
the same availability region. I have tried various combinations of
tun
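A typical load/run pair against the three nodes looks roughly like this (the
binding name and the hosts property depend on the YCSB version, and the host
names are placeholders):

  bin/ycsb load cassandra-10 -P workloads/workloada -p hosts=node1,node2,node3 -threads 20 -s
  bin/ycsb run cassandra-10 -P workloads/workloada -p hosts=node1,node2,node3 -threads 20 -s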