Here are the metrics you want:
http://cassandra.apache.org/doc/latest/operating/metrics.html#table-metrics
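
Those metrics are also exposed over JMX, so you can script a per-table
breakdown yourself. Here's a rough, untested sketch (assuming Cassandra 3.x
MBean names like MemtableOnHeapSize under
org.apache.cassandra.metrics:type=Table, and the default JMX port 7199) of a
small Java client that prints the memtable on-heap size for every table:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class TableHeapUsage {
        public static void main(String[] args) throws Exception {
            // Default Cassandra JMX endpoint; adjust host/port for your nodes.
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Wildcard over every keyspace/table's memtable on-heap gauge.
                ObjectName pattern = new ObjectName(
                    "org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=MemtableOnHeapSize");
                for (ObjectName name : mbsc.queryNames(pattern, null)) {
                    // Gauges expose their current reading via the "Value" attribute.
                    Object bytes = mbsc.getAttribute(name, "Value");
                    System.out.printf("%s.%s: %s bytes on heap%n",
                        name.getKeyProperty("keyspace"),
                        name.getKeyProperty("scope"),
                        bytes);
                }
            }
        }
    }

nodetool tablestats prints much of the same per-table information if you'd
rather not go through JMX. Keep in mind the memtable gauge won't capture all
per-table heap overhead (metrics objects, schema metadata, etc.), but it's a
good starting point.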

The best practice is to run fewer, bigger tables. If you have a lot of tables,
you're likely out of luck aside from throwing more RAM at the problem.

All the best,



Sebastián Estévez | Vanguard Solution Architect

Mobile +1.954.905.8615

sebastian.este...@datastax.com  |  datastax.com



On Fri, Oct 26, 2018 at 6:25 PM Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:

> Does anyone have any idea on this?
>
> On Thu, Oct 25, 2018 at 11:35 AM Jai Bheemsen Rao Dhanwada <
> jaibheem...@gmail.com> wrote:
>
>> Hello,
>>
>> I am running into a situation where a huge schema (large number of column
>> families) is causing OOM issues on the heap. Is there a way to measure how
>> much heap each column family uses?
>>
>
