On 1.12.2011 23:30, Bill wrote:
>> Our largest dataset has 1200 billion rows.
> Radim, out of curiosity, how many nodes is that running across?
32
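
For context, a straight division gives the per-node share implied by those two figures; the short Python sketch below ignores replication factor and token balance, which the thread does not state.

    # Back-of-the-envelope per-node row count implied by the figures above.
    # Assumes even distribution and ignores replication (neither is stated in the thread).
    total_rows = 1200 * 10**9   # 1200 billion rows
    nodes = 32

    print(f"{total_rows / nodes:,.0f} rows per node")   # 37,500,000,000 (~37.5 billion)
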
On 28/11/11 13:44, Radim Kolar wrote:
>> I understand that my computer may not be as powerful as those used in
>> the other benchmarks, but it shouldn't be that far off (1:30), right?
> Cassandra has very fast writes; you can have read:write ratios like 1:1000.
> Pure read workload on 1 billion rows, without key/row cache, on a 2-node cluster.
> R
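
For anyone wanting to reproduce that kind of measurement, the sketch below shows a pure-read test in the spirit of what Radim describes, assuming the modern DataStax Python driver (pip install cassandra-driver) rather than the Thrift-era tooling from 2011. The node addresses, keyspace "ks", table "data(key bigint PRIMARY KEY, value text)", and key range are placeholders, and key/row caching is assumed to be disabled on the server side.

    # Minimal sketch of a pure-read benchmark against a 2-node cluster.
    # Everything named here (hosts, keyspace, table, key range) is a placeholder,
    # not something taken from the thread.
    import random
    import time

    from cassandra.cluster import Cluster

    NODES = ["10.0.0.1", "10.0.0.2"]   # placeholder addresses for a 2-node cluster
    NUM_READS = 100_000                # sample size; scale up for a longer run
    MAX_KEY = 1_000_000_000            # assumes keys 0 .. 1e9-1 were loaded beforehand

    cluster = Cluster(NODES)
    session = cluster.connect("ks")
    select = session.prepare("SELECT value FROM data WHERE key = ?")

    start = time.perf_counter()
    for _ in range(NUM_READS):
        # Random point reads so results are not dominated by a single hot row.
        session.execute(select, [random.randrange(MAX_KEY)])
    elapsed = time.perf_counter() - start

    print(f"{NUM_READS / elapsed:,.0f} reads/s over {NUM_READS} random point reads")
    cluster.shutdown()

A single-threaded loop like this mostly measures round-trip latency; a real read benchmark would use many client threads or the driver's asynchronous execute_async calls to keep both nodes busy.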