The section *"Why does top report that Cassandra is using a lot more memory
than the Java heap max?" *on the page
https://cassandra.apache.org/doc/latest/cassandra/overview/faq/index.html
can provide some useful information. I have seen Cassandra try to take
all the available free
I would recommend first checking exactly which metric shows the drops:
free memory or available memory. There is a common misconception about free vs
available memory in Linux: https://www.linuxatemyram.com/
Otherwise, if you really have spikes in *used* memory and these are spikes in
memory used by
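For reference, a quick way to see both numbers on a Linux box (a minimal sketch; assumes a reasonably recent kernel and procps):

    # "free" on its own is misleading: MemFree excludes reclaimable page cache.
    # MemAvailable is the kernel's estimate of what can actually be allocated.
    free -m
    grep -E 'MemFree|MemAvailable|Cached' /proc/meminfo

If only MemFree drops while MemAvailable stays flat, it is almost certainly just the page cache filling up, which is normal.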
Thanks Bowen and Jon for the clarification and suggestions! I will go
through them and dig more.
Yes, the JVM heap size is fixed and I can see it is allocated at all times.
The spikes I am referring to happen in addition to heap allocated memory.
I had tuned heap settings to resolve GC pause
Can you explain a bit more what you mean by memory spikes?
The defaults we ship use the same settings for min and max JVM heap size,
so you should see all the memory allocated to the JVM at startup. Did you
change anything here? I don't recommend doing so.
If you're referring to fi
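For reference, the shipped defaults amount to something like this in jvm.options (or cassandra-env.sh on older releases); a sketch only, and the 10G figure is illustrative, matching the heap size mentioned elsewhere in this thread:

    # keep min and max heap equal so the whole heap is allocated
    # (and visible in RSS) at startup
    -Xms10G
    -Xmx10G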
Hi vignesh,
Correlation does not imply causation. I wouldn't work on the assumption
that the memory usage spikes are caused by compactions to start with.
It's best to prove the causal effect first. There are multiple ways to do
this; I'm just throwing in some ideas:
1
*Setup:*
I have a Cassandra cluster running in 3 datacenters with 3 nodes each
(total 9 nodes), hosted on GCP.
• *Replication Factor:* 3-3-3
• *Compaction Strategy:* LeveledCompactionStrategy
• *Heap Memory:* 10 GB (Total allocated memory: 32 GB)
• *Off-heap Memory:* around 4 GB
• *Workload
; I haven't found chunk cache to be particularly useful. It's a fairly
> small cache that could only help when you're dealing with a small hot
> dataset. I wouldn't bother increasing memory for it.
>
> Key cache can be helpful, but it depends on the workload. I genera
I haven't found chunk cache to be particularly useful. It's a fairly small
cache that could only help when you're dealing with a small hot dataset. I
wouldn't bother increasing memory for it.
Key cache can be helpful, but it depends on the workload. I generally
recommend
e about it? This way, we can focus on that.
Cheers,
Bowen
On 27/11/2023 14:59, Sébastien Rebecchi wrote:
Hello
When I use nodetool info, it prints that relevant information
Heap Memory (MB) : 14229.31 / 32688.00
Off Heap Memory (MB) : 5390.57
Key Cache : entries 670423, si
Hello
When I use nodetool info, it prints that relevant information
Heap Memory (MB) : 14229.31 / 32688.00
Off Heap Memory (MB) : 5390.57
Key Cache : entries 670423, size 100 MiB, capacity 100 MiB,
13152259 hits, 47205855 requests, 0.279 recent hit rate, 14400 save period
in
. Granted, it is not fun
to deal with tight space on a Cassandra cluster.
Sean R. Durity
From: Bowen Song
Sent: Tuesday, January 11, 2022 6:50 AM
To: user@cassandra.apache.org
Subject: [EXTERNAL] Re: about memory problem in write heavy system..
You don't really need 50% of free disk
't have
50% free disk space before upgrading Cassandra, you can choose to keep
the backup files elsewhere, or not make a backup at all. The latter is
of course not recommended for a production system.
On 11/01/2022 01:36, Eunsu Kim wrote:
Thank you Bowen.
As can be seen from the
Thank you Bowen.
As can be seen from the chart, the memory of existing nodes has increased since
new nodes were added. And I stopped writing a specific table. Write throughput
decreased by about 15%. And memory usage began to decrease.
I'm not sure if this was done by natural resolution
Maybe SSDs? Take a look at the IO read/write wait times.
FYI, your config changes simply push more activity into memory. Trading IO
for mem footprint ;{)
*Daemeon Reiydelle*
*email: daeme...@gmail.com *
*San Francisco 1.415.501.0198/Skype daemeon.c.m.reiydelle*
Cognitive Bias: (writt
3.11.4 is a very old release, with lots of known bugs. It's possible the
memory is related to that.
If you bounce one of the old nodes, where does the memory end up?
On Thu, Jan 6, 2022 at 3:44 PM Eunsu Kim wrote:
>
> Looking at the memory usage chart, it seems that the physical m
Looking at the memory usage chart, it seems that the physical memory usage of
the existing node has increased since the new node was added with
auto_bootstrap=false.
>
> On Fri, Jan 7, 2022 at 1:11 AM Eunsu Kim wrote:
> Hi,
>
> I
g more than 90% of
physical memory. (115GiB /125GiB)
Native memory usage of some nodes is gradually increasing.
All tables use TWCS, and TTL is 2 weeks.
Below is the applied jvm option.
-Xms31g
-Xmx31g
-XX:+UseG1GC
-XX:G1RSetUpdatingPauseTimePercent=5
-XX:MaxGCPauseMill
Hi
I have two questions on how large the heap size of a Cassandra JVM will be
when using the new parameter -XX:MaxRAMPercentage in a Kubernetes cluster
under the following conditions:
1) My JVM container has only the Memory Request set (in the Helm chart) but
the Memory Limit NOT set, in order to obtain
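For context, a minimal sketch of the percentage-based sizing (assuming JDK 8u191+ or JDK 11, where container support is on by default):

    # jvm options
    -XX:MaxRAMPercentage=50.0
    # The percentage is applied to the memory the JVM can see: the cgroup
    # memory limit when one is set. With no Memory Limit on the container
    # there is effectively no cgroup cap, so the heap is sized from the
    # node's physical RAM instead -- usually not what you want.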
Shouldn't cause GCs.
You can usually think of heap memory separately from the rest. It's
already allocated as far as the OS is concerned, and it doesn't know
anything about GC going on inside of that allocation. You can set
"-XX:+AlwaysPreTouch" to make sure it's p
Thanks. I guess some earlier thread got truncated.
I already applied Erick's recommendations and that seems to have worked in
reducing the RAM consumption by around 50%.
Regarding cheap memory and hardware, we are already running 96GB boxes and
getting multiple larger ones might be a l
, then go with the default one, and
increase memory capacity; nowadays hardware is cheaper.
Thanks,
Jim
On Mon, Aug 2, 2021 at 7:12 PM Amandeep Srivastava <
amandeep.srivastava1...@gmail.com> wrote:
> Can anyone please help with the above questions? To summarise:
>
> 1) What is the impac
Missed the heap part, not sure why that is happening
On Tue, Aug 3, 2021 at 8:59 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:
> mmap is used for faster reads and as you guessed right you might see read
> performance degradation. If you are seeing high memory usage a
mmap is used for faster reads and as you guessed right you might see read
performance degradation. If you are seeing high memory usage after repairs
due to mmaped files, the only way to reduce the memory usage is to trigger
some other process which requires memory. *mmapped* files use buffer/cache
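A quick way to see that accounting with standard Linux tooling (a minimal sketch):

    # buff/cache is where pages of mmapped SSTables are accounted; the kernel
    # shrinks it automatically when other processes need memory, so a large
    # number here is not by itself a problem.
    free -h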
ons in my earlier
> email.
>
> Thanks a lot.
>
> Regards,
> Aman
>
> On Thu, Jul 29, 2021 at 12:52 PM Amandeep Srivastava <
> amandeep.srivastava1...@gmail.com> wrote:
>
>> Thanks, Bowen, don't think that's an issue - but yes I can try up
an issue - but yes I can try upgrading
> to 3.11.5 and limit the merkle tree size to bring down the memory
> utilization.
>
> Thanks, Erick, let me try that.
>
> Can someone please share documentation relating to internal functioning of
> full repairs - if there exists one? W
Thanks, Bowen, don't think that's an issue - but yes I can try upgrading to
3.11.5 and limit the merkle tree size to bring down the memory utilization.
Thanks, Erick, let me try that.
Can someone please share documentation relating to internal functioning of
full repairs - if there
Based on the symptoms you described, it's most likely caused by SSTables
being mmap()ed as part of the repairs.
Set `disk_access_mode: mmap_index_only` so only index files get mapped and
not the data files. I've explained it in a bit more detail in this article
-- https://community.datastax.com/qu
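For reference, the setting goes in cassandra.yaml and needs a restart (minimal sketch; the property may not be present in the default yaml, in which case add it):

    # cassandra.yaml
    # map only the index files; data files are read with buffered I/O
    disk_access_mode: mmap_index_only

The trade-off, as noted earlier in this thread, is potentially slower reads on the data files since they are no longer memory-mapped.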
-heap memory and what can be done to cap/limit its use?
We run no other process apart from Cassandra daemon on these nodes.
Regards,
Aman
to 95%. Have
tried setting Xmx to 24GB, 31GB, 32GB, and 64GB, but all show the same
behavior. Could you please advise what might be consuming such high levels
of off-heap memory and what can be done to cap/limit its use?
We run no other process apart from Cassandra daemon on these nodes.
Regards,
Aman
Please disregard - this appears to be a Netty issue, not a
DataStax/Cassandra issue. My apologies!
-joe
On 5/24/2021 11:05 AM, Joe Obernberger wrote:
I'm getting the following error using 4.0RC1. I've increased direct
memory to 1g with: -XX:MaxDirectMemorySize=1024m
The error com
I'm getting the following error using 4.0RC1. I've increased direct
memory to 1g with: -XX:MaxDirectMemorySize=1024m
The error comes from an execute statement on a static
PreparedStatement. It runs fine for a while, and then dies.
Any ideas?
2021-05-24 11:03:10
Thanks a lot.
On Tue, 4 May 2021 at 19:51, Erick Ramirez
wrote:
> 2GB is allocated to the Reaper JVM on startup (see
> https://github.com/thelastpickle/cassandra-reaper/blob/2.2.4/src/packaging/bin/cassandra-reaper#L90-L91
> ).
>
> If you just want to test it out on a machine with only 8GB, you
2GB is allocated to the Reaper JVM on startup (see
https://github.com/thelastpickle/cassandra-reaper/blob/2.2.4/src/packaging/bin/cassandra-reaper#L90-L91
).
If you just want to test it out on a machine with only 8GB, you can update
the cassandra-reaper script to only use 1GB by setting -Xms1G and
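If it helps anyone later, the change is just an edit to the heap flags in that script (a sketch only; the install path and the exact form of the flags are assumptions, so check your copy around the lines linked above):

    # illustrative only -- verify the flag text and script location first
    sed -i 's/-Xms2G -Xmx2G/-Xms1G -Xmx1G/' /usr/local/bin/cassandra-reaper
    # then restart the Reaper process so the new heap size takes effect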
Hi Surbhi,
I don't know the memory requirements, but speaking from my observation,
a single Cassandra Reaper instance with an external postgres database
storage backend, and managing a single small Cassandra cluster, the
Cassandra Reaper's Java process memory usage is slightly sh
Hi,
What are the memory requirements for Cassandra Reaper?
I was trying to set up Cassandra Reaper on an 8GB box where Cassandra is
taking a 3GB heap, but I got the error "Cannot allocate memory".
Hence I wanted to understand the memory requirements for Cassandra Reaper.
What should be t
solution as you won't be making full use of your available memory.
>
> raft.so - Cassandra consulting, support, and managed services
>
>
> On Fri, Apr 16, 2021 at 10:10 AM Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>> Also,
>>
Yes that warning will still appear because it's a startup check and doesn't
take into account the disk_access_mode setting.
You may be able to cope with just indexes. Note this is still not an ideal
solution as you won't be making full use of your available memory.
raft.so - Cassa
Maximum
> number of memory map areas per process (vm.max_map_count) 65530 is too low,
> recommended value: 1048575, you can change it with sysctl.
On Thu, Apr 15, 2021 at 4:45 PM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:
> Thank you Kane and Jeff.
>
> can I su
x_only to use fewer maps (or disable it
> entirely as appropriate).
>
>
>
> On Thu, Apr 15, 2021 at 4:42 PM Kane Wilson wrote:
>
>> Cassandra mmaps SSTables into memory, of which there can be many files
>> (including all their indexes and what not). Typically it'll
disk_access_mode = mmap_index_only to use fewer maps (or disable it entirely
as appropriate).
On Thu, Apr 15, 2021 at 4:42 PM Kane Wilson wrote:
> Cassandra mmaps SSTables into memory, of which there can be many files
> (including all their indexes and what not). Typically it'll do
Cassandra mmaps SSTables into memory, of which there can be many files
(including all their indexes and what not). Typically it'll do so greedily
until you run out of RAM. 65k map areas tends to be quite low and can
easily be exceeded - you'd likely need very low density nodes to avoid
Hello All,
The recommended settings for Cassandra suggest having a higher value for
vm.max_map_count than the default of 65530
WARN [main] 2021-04-14 19:10:52,528 StartupChecks.java:311 - Maximum
> number of memory map areas per process (vm.max_map_count) 65530 is too low
> , recommended
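For reference, checking and raising the limit is a one-liner each (sketch; assumes root/sudo):

    # how many map areas the Cassandra process is using right now
    wc -l /proc/$(pgrep -f CassandraDaemon)/maps
    # current limit, then raise it to the recommended value
    sysctl vm.max_map_count
    sudo sysctl -w vm.max_map_count=1048575
    # persist across reboots
    echo 'vm.max_map_count = 1048575' | sudo tee /etc/sysctl.d/99-cassandra.conf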
Hi Jeff,
Provided information below.
How can I check how much memory is allocated to direct memory for the JVM?
On Thu, Jun 18, 2020 at 11:38 AM Jeff Jirsa wrote:
> Some things that are helpful:
>
> - What version of Cassandra
>
3.11.3
> - How much memory allocated to heap
>
Some things that are helpful:
- What version of Cassandra
- How much memory allocated to heap
- How much memory allocated to direct memory for the JVM
- How much memory on the full system
- Do you have a heap dump?
- Do you have a heap histogram?
- How much data on disk?
- What are your
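A sketch of commands that gather most of the items above (paths and the dump locations are illustrative):

    nodetool version
    nodetool info | grep -E 'Heap|Off Heap'
    pid=$(pgrep -f CassandraDaemon)
    jinfo -flag MaxDirectMemorySize $pid              # direct memory cap
    jmap -histo:live $pid > /tmp/heap-histogram.txt   # heap histogram
    jcmd $pid GC.heap_dump /tmp/cassandra-heap.hprof  # heap dump (large file)
    du -sh /var/lib/cassandra/data                    # data on disk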
Just to confirm, is this memory decline outside of the Cassandra process? If
so, I’d look at crond and at memory held for network traffic. Those are the
two areas I’ve seen leak. If you’ve configured to have swap=0, then you end up
in a position where even if the memory usage is stale
Hello,
I'm seeing a continuous decline in memory on a Cassandra instance; it used to
have 20g free memory 15 days back, now it's 15g and continuing to go down. The
same issue caused the Cassandra instance to crash before. Can you please give
me some pointers on what to look for to find what is causing this continuous
e.org"
Date: Wednesday, April 15, 2020 at 10:34 PM
To: "user@cassandra.apache.org"
Subject: How quickly off heap memory freed by compacted tables is reclaimed
Message from External Sender
Hi
As we know data structures like bloom filters, compression metadata, index
summary are kept
Hi
As we know data structures like bloom filters, compression metadata, index
summary are kept off heap. But once a table gets compacted, how quickly is
that memory reclaimed by the kernel?
Is it instant, or does it depend on when the reference is GCed?
Regards
Himanshu
It doesn't sound like you've had a good read of Michael Shuler's responses.
TL;DR it's not a Cassandra issue, it's a reporting issue. I recommend you
go and read Michael's response. Cheers!
>
usage should be around 9.6G and the rest, page cache, is around 8G.
But the free command shows only 12G is available for use. I want to understand
this discrepancy in memory consumption.
Regards
Himanshu
On Sun, Apr 12, 2020 at 10:28 AM Erick Ramirez
wrote:
> I observe long running nodes have high
>
> I observe long-running nodes have a higher non-heap memory footprint than a
> recently started node. That is the reason I am interested in finding non-heap
> memory usage by a Cassandra node. What could be the reason for a high non-heap
> memory footprint in a long-running cluster?
>
I observe long-running nodes have a higher non-heap memory footprint than a
recently started node. That is the reason I am interested in finding non-heap
memory usage by a Cassandra node. What could be the reason for a high non-heap
memory footprint in a long-running cluster?
Regards
Himanshu
On Sat, Apr 11
Hi Himanshu,
On Sat, Apr 11, 2020 at 8:47 PM HImanshu Sharma
wrote:
> I am not worried about page cache. I want to monitor memory pressure, I want
> to check that if heap+non heap usage goes above certain level then I can take
> certain action. But due to this page cache thing, I am
usage historically:
https://grafana.com/grafana/dashboards/5408
Kind regards,
Michael
On 4/11/20 12:47 PM, HImanshu Sharma wrote:
Hi
I am not worried about page cache. I want to monitor memory pressure, I
want to check that if heap+non heap usage goes above certain level then
I can take certain action
For some simple and helpful explanations of the behavior you are
observing, some ideas on what to look for in monitoring, as well as some
interesting experiments on the "play" page (last link), have a look at
https://www.linuxatemyram.com/ - this is general linux memory behavior
and
Hi
I am not worried about page cache. I want to monitor memory pressure, I
want to check that if heap+non heap usage goes above certain level then I
can take certain action. But due to this page cache thing, I am not sure
how to find actual memory usage (heap and off heap). Heap will not be more
One more point: if you are worried about high memory usage, then read
about the disk_access_mode
configuration of Cassandra. The default will cause high memory usage.
Setting it to mmap_index_only can help.
On Sat, Apr 11, 2020 at 5:43 PM Laxmikant Upadhyay
wrote:
> Hi,
>
> You can rea
Hi,
You can read section 'OS Page Cache Usage' on
http://cassandra.apache.org/doc/latest/troubleshooting/use_tools.html
Also, don't worry about memory usage (page cache) not decreasing even if no
traffic ...it will come down when required (for example: a new application
needs
Hi
I am observing memory usage in the top command, and there RSS is showing
18G (which I think is the sum of used memory + page cache). I want to know how
to find how much is used by the Cassandra process and how much of it is page
cache. I want this information because I want to check memory usage for
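One way to split this out on a modern kernel (4.14+; a minimal sketch):

    pid=$(pgrep -f CassandraDaemon)
    # Anonymous = memory the process really owns (heap + off-heap structures).
    # Rss - Anonymous is roughly the file-backed, mmapped part, which overlaps
    # with the OS page cache and is reclaimable.
    grep -E '^(Rss|Anonymous):' /proc/$pid/smaps_rollup
    # system-wide page cache and the kernel's estimate of usable memory
    grep -E '^(Cached|MemAvailable):' /proc/meminfo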
You can look at the top command. There is a column for memory.
Regards,
Nitan
Cell: 510 449 9629
> On Apr 11, 2020, at 11:10 AM, HImanshu Sharma
> wrote:
>
>
> Hi
>
> But I see memory not decreasing even if there is no traffic on cluster. How
> can I find actual m
Hi
But I see memory not decreasing even if there is no traffic on the cluster. How
can I find the actual memory usage of the Cassandra process? If it is OS page
cache, then how do I find how much is page cache and how much is used by the
process?
Thanks
Himanshu
On Sat, Apr 11, 2020 at 9:07 PM Laxmikant
It is OS page cache used during reads. Your OS will leverage memory if it is not
being used by any other applications, and it improves your read performance.
On Sat, Apr 11, 2020, 12:47 PM HImanshu Sharma
wrote:
> Hi
>
> I am very new to the use of cassandra. In a cassandra cluster of 3 node
Hi
I am very new to the use of Cassandra. In a Cassandra cluster of 3 nodes, I
am observing the memory usage of the Cassandra process going above the heap
memory allocated. As I understand, Cassandra allocates off-heap memory for
bloom filters, index summary, etc.
When I run nodetool info, I see off
Probably helps to think of how swap actually functions. It has a valid place,
so long as the behavior of the kernel and the OOM killer are understood.
You can have a lot of cold pages that have nothing at all to do with C*. If
you look at where memory goes, it isn’t surprising to see things
ng on infra does not make sense from a cost perspective, so
swap is an option.
But here, if the environment is up and running, it will be interesting to understand
what is consuming memory and whether the infra is sized correctly.
-Shishir
On Wed, 4 Dec 2019, 16:13 Hossein Ghiyasi Mehr,
wrote:
> "3. Though Da
al Solution for Data Gathering & Analysis*
*---*
On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar
wrote:
> Options: Assuming model and configurations are good and Data size per node
> less than 1 TB (though no such Benchmark).
>
> 1.
garbage collection, I get skeptical of how effectively
the O/S will determine what is good to swap. Most of the JVM memory in C*
churns at a rate that you wouldn’t want swap i/o to combine with if you cared
about latency. Not everybody cares about tight variance on latency though, so
there can be
Options, assuming the model and configurations are good and the data size per node
is less than 1 TB (though there is no such benchmark):
1. Scale the infra for memory.
2. Try changing disk_access_mode to mmap_index_only.
In this case you should not have any in-memory DB tables.
3. Though Datastax does not recommend it and
07 AM Reid Pinchback
wrote:
> Rahul, if my memory of this is correct, that particular logging message is
> noisy, the cache is pretty much always used to its limit (and why not, it’s
> a cache, no point in using less than you have).
>
>
>
> No matter what value you set, you’l
https://issues.apache.org/jira/browse/CASSANDRA-14416 )
On Mon, Dec 2, 2019 at 8:07 AM Reid Pinchback
wrote:
> Rahul, if my memory of this is correct, that particular logging message is
> noisy, the cache is pretty much always used to its limit (and why not, it’s
> a cache, no point in using less
Rahul, if my memory of this is correct, that particular logging message is
noisy, the cache is pretty much always used to its limit (and why not, it’s a
cache, no point in using less than you have).
No matter what value you set, you’ll just change the “reached (….)” part of it.
I think what
It may be helpful:
https://thelastpickle.com/blog/2018/08/08/compression_performance.html
It's complex. Simple explanation: Cassandra keeps SSTables in memory based
on chunk size and SSTable parts. It manages loading new SSTables into memory
based on requests on different SSTables correctly
s will
schedule a job with the IO scheduler -> the data is then read and returned
by the device drivers -> this data fetched from the disk is
accumulated in a memory location (file buffer) until the entire read
operation is complete -> then I guess the data is uncompressed ->
pr
Thanks Hossein,
How do the chunks get moved out of memory (LRU?) when it wants to make room
for new chunk requests? If it has a mechanism to clear chunks from the
cache, what causes the "cannot allocate chunk" message? Can you point me to any
documentation?
On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr
Chunks are part of SSTables. When there is enough space in memory to cache
them, read performance will increase if the application requests them again.
Your real answer is application dependent. For example write heavy
applications are different than read heavy or read-write heavy. Real time
Hello,
We are seeing "memory usage reached 512 MiB and cannot allocate 1 MiB". I see
this because file_cache_size_in_mb is set to 512 MB by default.
The Datastax documentation recommends increasing the file_cache_size.
We have 32G overall memory, with 16G allocated to Cassandra. What is the
recommended value in my
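For reference, the knob lives in cassandra.yaml (sketch; the value is illustrative and should fit inside your off-heap budget):

    # cassandra.yaml -- upper bound of the chunk cache, the pool behind the
    # "Maximum memory usage reached ... cannot allocate chunk" INFO messages
    file_cache_size_in_mb: 1024

As noted later in this thread, the message is informational: when the pool is exhausted, reads fall back to plain buffers rather than failing.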
M size
> relative to whatever you allocated to the cgroup. Unfortunately I’m not a
> K8s developer (that may change shortly, but that's the case atm). What you need
> to a firm handle on yourself is where does the memory for the O/S file
> cache live, and is that size sufficient for your read/wri
firm handle on
yourself is where does the memory for the O/S file cache live, and is that size
sufficient for your read/write activity. Bare metal and VM tuning I understand
better, so I’ll have to defer to others who may have specific personal
experience with the details, but the essence of
different; though -
just generally - what do you think of MaxRAMFraction=2 with Java 8?
If the stateful set is configured with 16Gi memory, that setting would
allocate roughly 8Gi to the heap and seems a safe balance between
heap/nonheap. No worries if you don't have enough information to answer
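For what it's worth, on Java 8 that usually means the older fraction-based flags (a sketch; MaxRAMPercentage only arrived in 8u191/JDK 10):

    # Java 8 container sizing (pre-8u191)
    -XX:+UnlockExperimentalVMOptions
    -XX:+UseCGroupMemoryLimitForHeap
    -XX:MaxRAMFraction=2    # heap = 1/2 of visible RAM, so ~8Gi for a 16Gi limit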
using the C* rack metaphor to ensure you don’t co-locate replicas.
For example, were I you, I’d start asking myself if SSTable compression
mattered to me at all. The reason I’d start asking myself questions like that
is C* has multiple uses of memory, and one of the balancing acts is chunk cache
monitoring and alerting on
> memory, cpu and disk (PVs) thresholds). More specifically, the
> Prometheus JMX exporter (noted above) scrapes all the MBeans inside
> Cassandra, exporting in the Prometheus data model. Its config map filters
> (allows) our metrics of interest, an
alerting on
memory, cpu and disk (PVs) thresholds). More specifically, the
Prometheus JMX exporter (noted above) scrapes all the MBeans inside
Cassandra, exporting in the Prometheus data model. Its config map filters
(allows) our metrics of interest, and those metrics are sent to our Grafana
instances
ou perform disaster recovery backup?
>
>
> Best,
>
> Sergio
>
> On Fri, Nov 1, 2019 at 14:14 Ben Mills
> wrote:
>
>> Thanks Sergio - that's good advice and we have this built into the plan.
>> Have you heard a solid/consistent recomme
t's good advice and we have this built into the plan.
> Have you heard a solid/consistent recommendation/requirement as to the
> amount of memory heap requires for G1GC?
>
> On Fri, Nov 1, 2019 at 5:11 PM Sergio wrote:
>
>> In any case I would test with tlp-stress or Cassandr
Thanks Sergio - that's good advice and we have this built into the plan.
Have you heard a solid/consistent recommendation/requirement as to the
amount of memory heap requires for G1GC?
On Fri, Nov 1, 2019 at 5:11 PM Sergio wrote:
> In any case I would test with tlp-stress or Cassandr
Thanks Reid,
We currently only have ~1GB data per node with a replication factor of 3.
The amount of data will certainly grow, though I have no solid projections
at this time. The current memory and CPU resources are quite low (for
Cassandra) and so along with the upgrade we plan to increase both
minimum amount of memory that needs to be allocated to heap
> space when using G1GC?
>
> For GC, we currently use CMS. Along with the version upgrade, we'll be
> running the stateful set of Cassandra pods on new machine types in a new
> node pool with 12Gi memory per node. Not a lot
Maybe I’m missing something. You’re expecting less than 1 gig of data per
node? Unless this is some situation of super-high data churn/brief TTL, it
sounds like you’ll end up with your entire database in memory.
From: Ben Mills
Reply-To: "user@cassandra.apache.org"
Date: Friday,
Greetings,
We are planning a Cassandra upgrade from 3.7 to 3.11.5 and considering a
change to the GC config.
What is the minimum amount of memory that needs to be allocated to heap
space when using G1GC?
For GC, we currently use CMS. Along with the version upgrade, we'll be
running the sta
Hello team,
I am observing below warn and info message in system.log
1. Info log: maximum memory usage reached (1.000GiB), cannot allocate chunk
of 1 MiB.
I tried increasing file_cache_size_in_mb in cassandra.yaml from 512
to 1024, but this message still shows up in the logs.
2. Warn log
te: Wednesday, March 6, 2019 at 22:19
To: "user@cassandra.apache.org"
Subject: Re: Maximum memory usage reached
Also, that particular logger is for the internal chunk / page cache. If it
can’t allocate from within that pool, it’ll just use a normal bytebuffer.
It’s not really a
hat can easily fit in memory. Is there a reason why you’re picking
> Cassandra for this dataset?
>
>> On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev
>> wrote:
>> Hi All,
>>
>>
>>
>> We have a tiny 3-node cluster
>>
>> C* version 3
That’s not an error. To the left of the log message is the severity, level
INFO.
Generally, I don’t recommend running Cassandra on only 2GB ram or for small
datasets that can easily fit in memory. Is there a reason why you’re
picking Cassandra for this dataset?
On Thu, Mar 7, 2019 at 8:04 AM
-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO [pool-1
Can we see the "nodetool tablestats" output for the biggest table as well?
From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID]
Sent: Sunday, February 10, 2019 7:21 AM
To: user@cassandra.apache.org
Subject: RE: Maximum memory usage
Okay, that’s at the moment it was calculated. Still need
Okay, that’s at the moment it was calculated. Still need to see histograms.
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Sunday, February 10, 2019 7:09 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage
Thanks Kenneth,
110mb is the biggest partition in
enneth Brotman
>
>
>
> *From:* Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> *Sent:* Sunday, February 10, 2019 6:43 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Maximum memory usage
>
>
>
> ```Percentile SSTables Write Latency Read LatencyP
One of the other DBs with a 100 MB partition* runs out of memory very
frequently.
```
Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50%         0.00      0.00
Rahul,
Those partitions are tiny. Could you give us the table histograms for the
biggest tables.
Thanks,
Kenneth Brotman
From: Rahul Reddy [mailto:rahulreddy1...@gmail.com]
Sent: Sunday, February 10, 2019 6:43 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage
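For anyone following along, both outputs come from nodetool (keyspace and table names are placeholders):

    nodetool tablestats <keyspace>.<table>
    nodetool tablehistograms <keyspace> <table>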
ning
> very often or logging an OOM.
>
> Dinesh
>
>
> On Wednesday, February 6, 2019, 6:19:42 AM PST, Rahul Reddy <
> rahulreddy1...@gmail.com> wrote:
>
>
> Hello,
>
> I see maximum memory usage alerts in my system.log a couple of times a
> day as INFO. So