On 12.03.2020, XuYun wrote:
> We ran into the same problem today while adding memory to OSD nodes,
> and it degraded the monitors' performance considerably. I noticed that the
> db kept growing after an OSD was shut down, so I suspect it is caused by
> the warning reports collected by the mgr insights module. When I disabled
> the mgr insights module, the db size dropped from 3x GB back to 1xx MB.
>
Works like a charm; disk usage is where I expected it to be. Thanks.
Should we ever meet in person: I owe you a drink.
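
For the archives, what this boils down to is roughly the following; the
store.db path and the mon ID are assumptions (a default deployment where
the mon ID is the short hostname), so adjust them to your setup:

  # how big is the mon store right now?
  du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db

  # list the enabled mgr modules, then turn off insights
  ceph mgr module ls
  ceph mgr module disable insights

If the space is not released right away, a manual compaction (see further
down in the thread) may help.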
>
> > On 12.03.2020 at 2:44 PM, Hartwig Hauschild <[email protected]> wrote:
> >
> > On 10.03.2020, Wido den Hollander wrote:
> >>
> >>
> >> On 3/10/20 10:48 AM, Hartwig Hauschild wrote:
> >>> Hi,
> >>>
> >>> I've done a bit more testing ...
> >>>
> >>> On 05.03.2020, Hartwig Hauschild wrote:
> >>>> Hi,
> >>>>
> > [ snipped ]
> >>> I've read somewhere in the docs that I should provide ample space (tens
> >>> of GB) for the store.db, and found on the ML and bug tracker that ~100GB
> >>> might not be a bad idea and that large clusters may require space an
> >>> order of magnitude greater.
> >>> Is there some sort of formula I can use to approximate the space required?
> >>
> >> I don't know about a formula, but make sure you have enough space. MONs
> >> are dedicated nodes in most production environments, so I usually
> >> install a 400-1000 GB SSD just to make sure they don't run out of space.
> >>
> > That seems fair.
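
(Related: the MON_DISK_BIG health warning fires once a mon store exceeds
mon_data_size_warn, which defaults to 15 GB. If the MONs deliberately get
that much headroom, raising the threshold may make sense; the value below
is only an example, in bytes:)

  ceph config set mon mon_data_size_warn 32212254720   # ~30 GB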
> >>>
> >>> Also: is the db supposed to grow this fast in Nautilus when it did not do
> >>> that in Luminous? Is that behaviour configurable somewhere?
> >>>
> >>
> >> The MONs need to cache the OSDMaps when not all PGs are active+clean,
> >> and thus their database grows.
> >>
> >> You can compact RocksDB in the meantime, but it won't last forever.
> >>
> >> Just make sure the MONs have enough space.
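
(For the record, compaction can be triggered per monitor, and there is also
an option to compact on daemon start; mon.a below is a placeholder for the
actual mon ID, and mon_compact_on_start is off by default:)

  # online compaction of one monitor's RocksDB
  ceph tell mon.a compact

  # compact the store every time the mon daemon starts
  ceph config set mon mon_compact_on_start true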
> >>
> > Do you happen to know if that behaved differently in previous releases? I'm
> > just asking because I have not found anything about this yet and may need to
> > explain that it's different now.
> >
> > --
> > Cheers,
> > Hardy
>
--
Cheers,
Hardy
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]