Hi Peter,
Thanks for sending this over. I don't know how 100 bytes (10 bytes of data *
10 columns) can represent anything useful. These days it is better to
benchmark with payloads of around 1 KB.
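As a back-of-envelope sketch (assuming the payload is only the column values,
ignoring column names and any storage overhead):

    # rough payload per write, values only
    columns_per_row = 10
    bytes_per_column = 10
    payload = columns_per_row * bytes_per_column  # 100 bytes
    print(1024 / payload)  # roughly 10x smaller than a 1 KB payload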
Thanks!
On Mon, Oct 31, 2016 at 4:58 PM, Peter Reilly wrote:
The original article
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
On Mon, Oct 31, 2016 at 5:57 PM, Peter Reilly wrote:
From the article:
java -jar stress.jar -d "144 node ids" -e ONE -n 2700 -l 3 -i 1 -t 200
-p 7102 -o INSERT -c 10 -r
The client is writing 10 columns per row key, row key randomly chosen from
27 million ids, each column has a key and 10 bytes of data. The total on
disk size for each write incl
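The quoted sentence is cut off, but the on-disk size per write has to be larger
than the 100 bytes of values, since each column also stores its name plus
timestamp/length overhead, and the write is then replicated. A purely
illustrative estimate (the per-column overhead, column-name size, and row-key
overhead below are assumptions, not figures from the article):

    # illustrative on-disk estimate per write, before replication/compression
    columns = 10
    value_bytes = 10
    name_bytes = 8          # assumed column name size
    overhead_bytes = 15     # assumed per-column timestamp/length overhead
    row_key_bytes = 20      # assumed row key plus row-level overhead
    per_write = row_key_bytes + columns * (name_bytes + value_bytes + overhead_bytes)
    print(per_write)        # ~350 bytes in this sketch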
Hi Guys,
I keep reading the articles below, but the biggest questions for me are:
1) What is the data size per request? Without the data size it is hard for me
to make sense of any of the numbers (see the rough sketch after the links).
2) Is there batching here?
http://www.datastax.com/1-million-writes
http://techblog.netflix.com/2014/07/r
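On question 1, the reason the data size matters: multiplying the headline
write rate by the per-request payload gives the raw bandwidth the cluster
actually moved. A rough sketch, assuming the 1 million writes/sec from the
DataStax headline and the 100-byte payload discussed above:

    # aggregate write bandwidth implied by the headline numbers (assumed inputs)
    writes_per_sec = 1_000_000   # from the "1 million writes" headline
    payload_bytes = 100          # 10 columns * 10 bytes, per the Netflix post
    mb_per_sec = writes_per_sec * payload_bytes / 1e6
    print(mb_per_sec)            # about 100 MB/s of raw payload across the cluster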