Hi Zakhar,
>
> I did try to play with various debug settings. The issue is that mons
> produce logs of all commands issued by clients, not just mgr. For example,
> an Openstack Cinder node asking for space it can use:
>
> Oct 9 07:59:01 ceph03 bash[4019]: debug 2023-10-09T07:59:01.303+0000
> 7f489da8f700 0 log_channel(audit) log [DBG] : from='client.?
> 10.208.1.11:0/3286277243'
> entity='client.cinder' cmd=[{"prefix":"osd pool get-quota", "pool":
> "volumes-ssd", "format":"json"}]: dispatch
I am on an older version of Ceph, so I am not sure whether I even have these.
There is also an option in ceph.conf for client-side logging:
[client]
#debug client = 5
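As a side note, the same thing can usually be changed at runtime instead of editing ceph.conf. A minimal sketch as a dry run that only prints the commands (the `debug_client` key matches the ceph.conf line above; pipe through sh on a cluster node to actually run them):

```shell
# Dry run: print the runtime equivalents of the ceph.conf entry above.
# Pipe the output through sh on a node with cluster access to apply it.
for cmd in \
    "ceph config get client debug_client" \
    "ceph config set client debug_client 5/5"; do
    echo "$cmd"
done
```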
>
> It is unclear which of the many mon debug options controls this
> particular type of debug message. I tried searching for documentation
> of the mon debug options, to no avail.
>
Maybe there is something equivalent to this for logging?
ceph daemon mon.a perf schema|less
ceph daemon osd.0 perf schema|less
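If I remember correctly, those audit [DBG] entries go through the monitors' cluster-log channel rather than a debug_* subsystem, so the relevant knob may be the cluster-log file level rather than any mon debug option. A dry-run sketch that only prints the commands; the option name `mon_cluster_log_file_level` is my assumption for this Ceph version, so verify it first with `ceph config help`:

```shell
# Dry run: print commands that would raise the cluster log file level
# from debug to info, which should stop [DBG]-level audit lines from
# being written to the mon log file.
# NOTE: mon_cluster_log_file_level is assumed to exist on this version;
# check with: ceph config help mon_cluster_log_file_level
for cmd in \
    "ceph config get mon mon_cluster_log_file_level" \
    "ceph config set mon mon_cluster_log_file_level info"; do
    echo "$cmd"
done
```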
>
>
> Did you do something like this?
>
> Getting keys with
> ceph daemon mon.a config show | grep debug_ | grep mgr
>
> ceph tell mon.* injectargs --$monk=0/0
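The $monk in the quoted line suggests a loop over the keys found with the grep above. A minimal sketch as a dry run (the two key names are illustrative examples, not taken from this thread; derive the real list from `config show`):

```shell
# Dry run: generate one injectargs command per mgr-related debug key.
# Key names here are examples; build the real list from:
#   ceph daemon mon.a config show | grep debug_ | grep mgr
for monk in debug_mgr debug_mgrc; do
    echo "ceph tell mon.* injectargs --${monk}=0/0"
done
```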
>
> >
> > Any input from anyone, please?
> >
> > This part of Ceph is very poorly documented. Perhaps there's a
> > better place to ask this question? Please let me know.
> >
> > /Z
> >
> > On Sat, 7 Oct 2023 at 22:00, Zakhar Kirpichenko <[email protected]> wrote:
> >
> > > Hi!
> > >
> > > I am still fighting excessive logging. I've reduced unnecessary
> > > logging from most components except for mon audit:
> > > https://pastebin.com/jjWvUEcQ
> > >
> > > How can I stop logging this particular type of message?
> > >
> > > I would appreciate your help and advice.
> > >
> > > /Z
> > >
> > >
> > >> Thank you for your response, Igor.
> > >>
> > >> Currently debug_rocksdb is set to 4/5:
> > >>
> > >> # ceph config get osd debug_rocksdb
> > >> 4/5
> > >>
> > >> This setting seems to be the default. Is my understanding correct
> > >> that you're suggesting setting it to 3/5 or even 0/5? Would setting
> > >> it to 0/5 have any negative effects on the cluster?
> > >>
> > >> /Z
> > >>
> > >> On Wed, 4 Oct 2023 at 21:23, Igor Fedotov <[email protected]> wrote:
> > >>
> > >>> Hi Zakhar,
> > >>>
> > >>> To reduce RocksDB logging verbosity, you might want to set
> > >>> debug_rocksdb to 3 (or 0).
> > >>>
> > >>> I presume it produces a significant part of the logging traffic.
> > >>>
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Igor
> > >>>
> > >>> On 04/10/2023 20:51, Zakhar Kirpichenko wrote:
> > >>> > Any input from anyone, please?
> > >>> >
> > >>> > On Tue, 19 Sept 2023 at 09:01, Zakhar Kirpichenko
> <[email protected]>
> > >>> wrote:
> > >>> >
> > >>> >> Hi,
> > >>> >>
> > >>> >> Our Ceph 16.2.x cluster, managed by cephadm, is logging a lot of
> > >>> >> very detailed messages; on hosts with monitors and several OSDs,
> > >>> >> the Ceph logs alone have already eaten through 50% of the
> > >>> >> endurance of the flash system drives over a couple of years.
> > >>> >>
> > >>> >> Cluster logging settings are default, and it seems that all
> > >>> >> daemons are writing lots and lots of debug information to the
> > >>> >> logs, for example: https://pastebin.com/ebZq8KZk (it's just a
> > >>> >> snippet, but there's lots and lots of various information).
> > >>> >>
> > >>> >> Is there a way to reduce the amount of logging and, for example,
> > >>> >> limit it to warnings or important messages, so that it doesn't
> > >>> >> include every successful authentication attempt, compaction, etc.,
> > >>> >> when the cluster is healthy and operating normally?
> > >>> >>
> > >>> >> I would very much appreciate your advice on this.
> > >>> >>
> > >>> >> Best regards,
> > >>> >> Zakhar
> > >>> >>
> > >>> >>
> > >>> >>
> > >>>
> > >>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]