The IOPS dropped with the drop in read I/O throughput. The Cassandra reads
and network sent/recv'd are the same.
We also did not adjust our heap size at 2x. Between cachestats and thinking
about how mmap works (it probably doesn't formally access files in a way
inotify monitors would detect) and given o
What isn't clear is whether the slow IO is a result of doing far fewer IOs
and serving from RAM, or whether slow IO is just slow IO on the same number
of reads/second. I assume you're doing far fewer IOs, and the slowness is a
sampling error.
Do you know how many ops/second you're reading from each disk? Cl
Linux has crappy instrumentation on the file cache.
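For the per-disk ops question, /proc/diskstats is one counter source that is
dependable. A minimal sketch that samples it twice and prints the read
deltas (the device names are an assumption; use whatever lsblk shows on your
instances):

    #!/usr/bin/env python3
    # Rough per-device read IOPS / throughput from /proc/diskstats.
    # Sector counts are in 512-byte units regardless of the device.
    import time

    DEVICES = ("nvme0n1", "nvme1n1")  # assumption: adjust to your instances
    INTERVAL = 5.0

    def read_counters():
        counters = {}
        with open("/proc/diskstats") as f:
            for line in f:
                parts = line.split()
                if parts[2] in DEVICES:
                    # field 4 = reads completed, field 6 = sectors read
                    counters[parts[2]] = (int(parts[3]), int(parts[5]))
        return counters

    before = read_counters()
    time.sleep(INTERVAL)
    after = read_counters()

    for dev, (reads0, sectors0) in before.items():
        reads1, sectors1 = after[dev]
        print(f"{dev}: {(reads1 - reads0) / INTERVAL:.0f} read IOPS, "
              f"{(sectors1 - sectors0) * 512 / INTERVAL / 1024:.0f} KB/s")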
I tried the cachestats script from perf-tools; it is producing negative
numbers for cache hits on the 2x.
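My understanding (an assumption; I haven't verified it against the script)
is that cachestats infers hits by subtracting racy kernel function counts,
which would explain the negatives. A cruder but honest alternative is just
polling the page-cache size from /proc/meminfo and watching the deltas; a
sketch:

    #!/usr/bin/env python3
    # Crude page-cache watcher: poll Cached/Buffers from /proc/meminfo.
    # Not a hit-ratio tool; it only shows the cache size and how fast it
    # is changing, which is enough to spot churn vs. steady state.
    import time

    FIELDS = ("Cached", "Buffers")

    def meminfo_kb():
        vals = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in FIELDS:
                    vals[key] = int(rest.split()[0])  # value is in kB
        return vals

    prev = meminfo_kb()
    while True:
        time.sleep(5)
        cur = meminfo_kb()
        for k in FIELDS:
            print(f"{k}: {cur[k] // 1024} MB ({cur[k] - prev[k]:+d} kB / 5s)")
        prev = cur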
If the files are mmap'd, would that bypass any inotify detection when a
file access occurs, i.e. a page fault? I'm guessing yes.
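This is testable directly. A sketch using raw inotify through ctypes
(Linux-only; the path is a throwaway): if the guess is right, touching an
mmap'd page produces no IN_ACCESS event while read(2) does, because page
faults bypass the VFS read path that inotify hooks.

    #!/usr/bin/env python3
    # Does touching an mmap'd page show up as an inotify IN_ACCESS event?
    import ctypes, mmap, os

    IN_ACCESS = 0x00000001
    IN_NONBLOCK = os.O_NONBLOCK  # same value as IN_NONBLOCK on Linux
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    path = b"/tmp/inotify_mmap_test"
    with open(path, "wb") as f:
        f.write(b"x" * mmap.PAGESIZE)

    ifd = libc.inotify_init1(IN_NONBLOCK)
    libc.inotify_add_watch(ifd, path, IN_ACCESS)

    def drain(label):
        try:
            events = os.read(ifd, 4096)
            print(f"{label}: {len(events)} bytes of inotify events")
        except BlockingIOError:
            print(f"{label}: no inotify events")

    fd = os.open(path, os.O_RDONLY)
    m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
    _ = m[0]             # page fault; data arrives via the mapping
    drain("after mmap access")

    os.pread(fd, 16, 0)  # normal read(2) through the VFS
    drain("after read()")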
On Wed, Dec 2, 2020 at
From C* 2.2 onwards, SSTables get mapped to memory by mmap(), so the hot
data will be accessed much faster on systems with more RAM.
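The effect is easy to see in isolation. A sketch (not Cassandra-specific;
the file size and path are arbitrary) that touches every page of an mmap'd
file cold and then warm, using posix_fadvise(DONTNEED) to evict first:

    #!/usr/bin/env python3
    # Time mmap'd access cold (file evicted) vs. warm (in page cache).
    # posix_fadvise(DONTNEED) is best-effort eviction, so fsync first.
    import mmap, os, time

    PATH = "/tmp/mmap_timing_test"
    SIZE = 256 * 1024 * 1024  # arbitrary 256 MB test file

    with open(PATH, "wb") as f:
        f.write(os.urandom(1024) * (SIZE // 1024))
        f.flush()
        os.fsync(f.fileno())  # dirty pages can't be dropped by fadvise

    fd = os.open(PATH, os.O_RDONLY)

    def touch_every_page():
        m = mmap.mmap(fd, SIZE, prot=mmap.PROT_READ)
        t0 = time.monotonic()
        total = 0
        for off in range(0, SIZE, mmap.PAGESIZE):
            total += m[off]  # page fault if the page isn't resident
        m.close()
        return time.monotonic() - t0

    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # evict whole file
    print(f"cold: {touch_every_page():.3f}s")
    print(f"warm: {touch_every_page():.3f}s")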
On Thu, 3 Dec 2020 at 09:57, Carl Mueller
wrote:
> I agree in theory, I just want some way to confirm that file accesses in
> the larger instance are being intercepted by the file cache, vs what is
> happening in the other case.
I agree in theory, I just want some way to confirm that file accesses in
the larger instance are being intercepted by the file cache, vs what is
happening in the other case.
I've tried Amy Tobey's pcstat.
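For reference, what pcstat does under the hood is mincore(2) per file; a
minimal sketch of the same check in Python/ctypes, in case it helps
cross-check pcstat's output against an SSTable:

    #!/usr/bin/env python3
    # What pcstat reports, in miniature: residency via mincore(2).
    # The PROT_WRITE|MAP_PRIVATE mapping is only so ctypes can take the
    # buffer's address; nothing is ever written to the file.
    import ctypes, mmap, os, sys

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    def cached_fraction(path):
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.fstat(fd).st_size
            if size == 0:
                return 1.0
            m = mmap.mmap(fd, size, flags=mmap.MAP_PRIVATE,
                          prot=mmap.PROT_READ | mmap.PROT_WRITE)
            pages = (size + mmap.PAGESIZE - 1) // mmap.PAGESIZE
            vec = (ctypes.c_ubyte * pages)()
            buf = ctypes.c_char.from_buffer(m)
            addr = ctypes.c_void_p(ctypes.addressof(buf))
            if libc.mincore(addr, ctypes.c_size_t(size), vec):
                raise OSError(ctypes.get_errno(), "mincore failed")
            del buf  # release the buffer export so the mmap can close
            m.close()
            return sum(b & 1 for b in vec) / pages
        finally:
            os.close(fd)

    for p in sys.argv[1:]:
        print(f"{p}: {cached_fraction(p):.1%} resident in page cache")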
I'd assume the 2x would have a file cache with lots of partial caches of
files, and churn in the cache. The heap is the normal proportion, probably
1/2 of RAM. So there definitely will be more non-heap memory for file
caching.
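For concreteness, a back-of-envelope sketch of where the non-heap memory
goes if the stock heap calculation is in play (the formula is my
recollection of calculate_heap_sizes() in the 2.x cassandra-env.sh, so
treat it as an assumption):

    # Assumed default: max_heap = max(min(ram/2, 1 GB), min(ram/4, 8 GB))
    def default_heap_mb(system_memory_mb):
        half = min(system_memory_mb // 2, 1024)
        quarter = min(system_memory_mb // 4, 8192)
        return max(half, quarter)

    for name, ram_mb in (("m5.2xlarge", 32 * 1024), ("m5.4xlarge", 64 * 1024)):
        heap = default_heap_mb(ram_mb)
        print(f"{name}: {heap} MB heap, {(ram_mb - heap) // 1024} GB non-heap")
    # both land on an 8 GB heap, so doubling RAM adds ~32 GB of page cache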
The Amy Tobey utility does not show churn in the file cache, however. With
almost three orders of magnitude difference in the amount of disk access, I
would expect churn in the OS page cache.
This is exactly what I would expect when you double the memory and all of
the data lives in the page cache.
On Wed, Dec 2, 2020 at 8:41 AM Carl Mueller
wrote:
> Oh, this is Cassandra 2.2.13 (multi-tenant delays) and Ubuntu 18.04.
>
> On Wed, Dec 2, 2020 at 10:35 AM Carl Mueller
> wrote:
>
>> We have a cluster that is experiencing very high disk read I/O in the
>> 20-40 MB/sec range on m5.2x (gp2 drives).
Is the heap larger on the m5.4x instance?
Are you sure it's Cassandra generating the read traffic vs just evicting
files read by other systems?
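One way to check attribution is to diff read_bytes in /proc/<pid>/io (the
same counters iotop uses); a sketch, assuming you can run it as root:

    #!/usr/bin/env python3
    # Attribute disk reads to processes by diffing read_bytes in
    # /proc/<pid>/io. Run as root to see other users' processes.
    import os, time

    def snapshot():
        reads = {}
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/io") as f:
                    fields = dict(line.split(":", 1) for line in f)
                with open(f"/proc/{pid}/comm") as f:
                    comm = f.read().strip()
                reads[pid] = (comm, int(fields["read_bytes"]))
            except (FileNotFoundError, PermissionError, ProcessLookupError):
                continue  # process exited or is off-limits
        return reads

    a = snapshot()
    time.sleep(10)
    b = snapshot()
    top = sorted(((b[p][1] - a[p][1], b[p][0], p) for p in b if p in a),
                 reverse=True)
    for nbytes, comm, pid in top[:10]:
        print(f"{comm}[{pid}]: {nbytes / 10 / 1024:.0f} KB/s read from disk")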
In general, I'd call "more RAM means fewer drive reads" a very expected
result regardless of the details, especially when it's the difference
between fitting the hot data set in the page cache and not.
Oh, this is Cassandra 2.2.13 (multi-tenant delays) and Ubuntu 18.04.
On Wed, Dec 2, 2020 at 10:35 AM Carl Mueller
wrote:
> We have a cluster that is experiencing very high disk read I/O in the
> 20-40 MB/sec range on m5.2x (gp2 drives). This is verified via VM metrics
> as well as iotop.
>
> When we switch to m5.4x, it drops to 60 KB/sec.
We have a cluster that is experiencing very high disk read I/O in the 20-40
MB/sec range on m5.2x (gp2 drives). This is verified via VM metrics as well
as iotop.
When we switch to m5.4x, it drops to 60 KB/sec.
There is no difference in network send/recv or read/write request counts.
The graph for read