look at, I'm very happy to
do so.
A.
On 11/23/2015 12:07 PM, Antoine Bonavita wrote:
Sebastian,
I tried to ramp up volume with this new setting and ran into the same
problems.
After that I restarted my nodes. This pretty much instantly got read
latencies back to normal (< 5ms) on
d at 15%
(which is what I was seeing with max_sstable_age_days set at 5).
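(For readers following along: max_sstable_age_days is a
DateTieredCompactionStrategy option, and it is applied per table. A
minimal sketch of how such a setting would be changed, with a
hypothetical keyspace/table name, is:

    ALTER TABLE myks.blobs
      WITH compaction = {
        'class': 'DateTieredCompactionStrategy',
        'max_sstable_age_days': '5'
      };

Note that changing compaction options triggers recompaction work, so
latencies can move for a while after the ALTER.)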
I'm really happy with the first item on my list, but the other items seem
to indicate something is still wrong, and it does not look like it's
compaction.
Any help would be truly appreciated.
A.
On 11/20/2015
transactional backbone of choice for the world's
most innovative companies such as Netflix, Adobe, Intuit, and eBay.
On Wed, Nov 18, 2015 at 5:53 PM, Antoine Bonavita <anto...@stickyads.tv> wrote:
Sebastian,
Your help is very much appreciated. I re-read the blog post and also
On Tue, Nov 17, 2015 at 11:08 AM, Sebastian Estevez
<sebastian.este...@datastax.com> wrote:
Your SSTables are probably falling out of page cache on the
smaller nodes and your slow disks are killing your latencies.
+1 most likely.
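(One way to test the page-cache theory is pcstat, a third-party tool by
Al Tobey that reports how much of each file is resident in page cache.
The data path below assumes the default data directory and a
hypothetical keyspace/table name; adjust to your layout:

    # pcstat: https://github.com/tobert/pcstat (third-party, not part of Cassandra)
    pcstat /var/lib/cassandra/data/myks/blobs-*/*-Data.db

    # rough view of total cached memory on the node
    free -m

If the -Data.db files show low residency on the smaller nodes, reads
are going to disk and the latency difference is explained.)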
Are the heaps the same size on both machin
like any direction on what I should do to
get help.
Thanks,
Antoine.
On 11/16/2015 10:04 AM, Antoine Bonavita wrote:
Hello,
We have a performance problem when trying to ramp up Cassandra (as a
MongoDB replacement) on a very specific use case. We store a blob indexed
by a key and expire it af
m 10 to 20.
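(The blob-indexed-by-key-with-expiry use case described above would look
roughly like the following schema sketch. The keyspace/table names and
the TTL value are hypothetical, since the actual expiry was truncated in
the message:

    CREATE TABLE myks.blobs (
        key   text PRIMARY KEY,
        value blob
    ) WITH default_time_to_live = 259200;  -- TTL in seconds; actual value not stated

With a table-level default_time_to_live, every write is expired
automatically, which is what makes the compaction strategy choice so
important for this workload.)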
So I thought this was related to the memtable being flushed "too early"
on 32G machines. I increased memtable_heap_space_in_mb to 4G on the 32G
machines, but it did not change anything.
At this point I'm kind of lost and could use any help in understanding
why I'