On Thu, Aug 20, 2015 at 2:35 PM, Dałek, Piotr
<[email protected]> wrote:
>> -----Original Message-----
>> From: [email protected] [mailto:ceph-devel-
>> [email protected]] On Behalf Of Blinick, Stephen L
>> Sent: Wednesday, August 19, 2015 6:58 PM
>>
>> [..
>> Regarding the all-HDD or high density HDD nodes, is it certain these issues
>> with tcmalloc don't apply, due to lower performance, or would it potentially
>> be something that would manifest over a longer period of time
>> (weeks/months) of running?   I know we've seen some weirdness attributed
>> to tcmalloc on our 10-disk 20-node cluster with HDD's &  SSD journals, but it
>> took a few weeks.
>
> And it takes me just a few minutes with rados bench to reproduce this issue
> on a mixed-storage node (SSDs, SAS disks, high-capacity SATA disks, etc.).
> See here: http://ceph.predictor.org.pl/cpu_usage_over_time.xlsx
> It gets even worse when rebalancing starts...
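For reference, a reproduction along the lines Piotr describes might look like the following; the pool name, block size, thread count, and duration are assumptions on my part, not values given in this thread:

```shell
# Hypothetical sketch: sustained small-object writes from many client
# threads tend to surface the tcmalloc thread-cache contention on the
# OSD side. Pool "rbd", 4 KiB objects, 16 concurrent ops, 10 minutes
# are all assumed parameters.
rados bench -p rbd 600 write -b 4096 -t 16 --no-cleanup

# While the bench runs, watch per-OSD CPU usage climb, e.g.:
top -b -n 1 | grep ceph-osd
```

This requires a running Ceph cluster, so it is a sketch of the workload shape rather than something runnable in isolation.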

Cool, that matches my thinking. I guess the only way to lighten the memory
problem is to solve it for each heavy memory-allocation use case.

>
> With best regards / Pozdrawiam
> Piotr Dałek



-- 
Best Regards,

Wheat
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html