On Tue, 20 Apr 2021 08:33:43 -0700
Alexei Starovoitov wrote:
> I don't see how you can do it without BTF.
I agree.
> The mass-attach feature should prepare generic 6 or so arguments
> from all functions it attached to.
> On x86-64 it's trivial because 6 regs are the same.
> On arm64 is now mo
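A minimal sketch of the idea above, for context: the System V x86-64 ABI passes the first six integer arguments in the same registers for every function, so a single generic trampoline can capture them for any function it attaches to. The struct and field names below assume the x86-64 struct pt_regs layout; this is an illustration, not the in-tree ftrace/BPF code.

/* Illustration only: capture the six x86-64 argument registers. */
struct generic_args {
	unsigned long args[6];		/* rdi, rsi, rdx, rcx, r8, r9 */
};

static void fill_generic_args(struct generic_args *ga, struct pt_regs *regs)
{
	ga->args[0] = regs->di;
	ga->args[1] = regs->si;
	ga->args[2] = regs->dx;
	ga->args[3] = regs->cx;
	ga->args[4] = regs->r8;
	ga->args[5] = regs->r9;
}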
On Mon, 19 Apr 2021 22:51:46 +0200
Jiri Olsa wrote:
> now, it looks like the fgraph_ops entry callback does not have access
> to registers.. once we have that, we could store arguments for the exit
> callback and have all in place.. could this be added? ;-)
Sure. The only problem is that we need
On Sat, 17 Apr 2021 00:03:04 +0900
Masami Hiramatsu wrote:
> > Anyway, IIRC, Masami wasn't sure that the full regs was ever needed for the
> > return (who cares about the registers on return, except for the return
> > value?)
>
> I think kretprobe and ftrace are for a bit different usage. kret
On Thu, 15 Apr 2021 23:49:43 +0200
Jiri Olsa wrote:
> right, I quickly checked on that and it looks exactly like
> the thing we need
>
> I'll try to rebase that on the current code and try to use
> it with the bpf ftrace probe to see how it fits
>
> any chance you could plan on reposting it? ;
[
Added Masami, as I didn't realize he wasn't on Cc. He's the maintainer of
kretprobes.
Masami, you may want to use lore.kernel.org to read the history of this
thread.
]
On Thu, 15 Apr 2021 13:45:06 -0700
Andrii Nakryiko wrote:
> > I don't know how the BPF code does it, but if you are
On Thu, 15 Apr 2021 14:18:31 -0400
Steven Rostedt wrote:
> My last release of that code is here:
>
> https://lore.kernel.org/lkml/20190525031633.811342...@goodmis.org/
>
> It allows you to "reserve data" to pass from the caller to the return, and
> that could hold
On Thu, 15 Apr 2021 19:39:45 +0200
Jiri Olsa wrote:
> > I don't know how the BPF code does it, but if you are tracing the exit
> > of a function, I'm assuming that you hijack the return pointer and replace
> > it with a call to a trampoline that has access to the arguments. To do
>
> hi,
> it'
On Wed, 14 Apr 2021 15:46:49 -0700
Andrii Nakryiko wrote:
> On Wed, Apr 14, 2021 at 5:19 AM Jiri Olsa wrote:
> >
> > On Tue, Apr 13, 2021 at 06:04:05PM -0700, Andrii Nakryiko wrote:
> > > On Tue, Apr 13, 2021 at 7:57 AM Jiri Olsa wrote:
> > > >
> > > > hi,
> > > > sending another attempt on
On Tue, 13 Apr 2021 09:56:40 +0200
Dmitry Vyukov wrote:
> Thanks for looking into this.
> If this is not a kernel bug, then it must not use WARN_ON[_ONCE]. It
> makes the kernel untestable for both automated systems and humans:
>
> https://lwn.net/Articles/769365/
>
>
> Greg Kroah-Hartman rais
from inside lockdep,
> triggering tracing, which in turn calls RCU, which then uses
> lockdep_assert_irqs_disabled().
>
> Fixes: a21ee6055c30 ("lockdep: Change hardirq{s_enabled,_context}
> to per-cpu variables")
> Reported-by: Steven Rostedt
> Signed-off-by: Peter Zijlstra (Intel)
> Signed-off-by: Ingo Molnar
On Thu, 11 Mar 2021 17:44:15 +0800
Tony Lu wrote:
> ---
> include/trace/events/net.h    | 42 +--
> include/trace/events/qdisc.h  |  4 ++--
> include/trace/events/sunrpc.h |  4 ++--
> include/trace/events/tcp.h    |  2 +-
> 4 files changed, 26 insertions(+), 26
On Wed, 10 Mar 2021 17:03:40 +0800
Tony Lu wrote:
> On Tue, Mar 09, 2021 at 12:40:11PM -0500, Steven Rostedt wrote:
> > The above shows 10 bytes wasted for this event.
> >
>
> I use pahole to read vmlinux.o directly with defconfig and
> CONFIG_DEBUG_INFO enab
On Wed, 10 Mar 2021 17:03:40 +0800
Tony Lu wrote:
> I use pahole to read vmlinux.o directly with defconfig and
> CONFIG_DEBUG_INFO enabled, the result shows 22 structs prefixed with
> trace_event_raw_ that have at least one hole.
I was thinking of pahole too ;-)
But the information can also be
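As an aside, a hedged example of the kind of hole pahole reports: a 4-byte field followed by an 8-byte field leaves padding that reordering removes. The structs below are made up for illustration and are not an actual trace_event_raw_* layout.

/* 8-byte alignment of 'len' forces a 4-byte hole after 'ifindex'. */
struct event_padded {
	int	ifindex;	/*  4 bytes              */
				/*  4-byte hole           */
	u64	len;		/*  8 bytes              */
	int	queue;		/*  4 bytes              */
				/*  4 bytes tail padding  */
};				/* sizeof() == 24         */

/* Reordered with the largest member first: no holes. */
struct event_packed {
	u64	len;
	int	ifindex;
	int	queue;
};				/* sizeof() == 16         */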
On Tue, 9 Mar 2021 13:17:23 -0700
David Ahern wrote:
> On 3/9/21 1:02 PM, Steven Rostedt wrote:
> > On Tue, 9 Mar 2021 12:53:37 -0700
> > David Ahern wrote:
> >
> >> Changing the order of the fields will impact any bpf programs expecting
> >> the e
On Tue, 9 Mar 2021 12:53:37 -0700
David Ahern wrote:
> Changing the order of the fields will impact any bpf programs expecting
> the existing format
I thought bpf programs were not API. And why are they not parsing this
information? They have these offsets hard coded? Why would they do that!
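A hedged userspace sketch of what parsing that information could look like: read a field's byte offset from the event's format file instead of baking it into the program. The path and field name in the usage comment are examples, and it assumes tracefs is mounted at /sys/kernel/tracing.

#include <stdio.h>
#include <string.h>

/* Return the byte offset of 'field' in the given event format file, or -1. */
static int field_offset(const char *format_path, const char *field)
{
	char line[256], want[64];
	int offset = -1;
	FILE *fp = fopen(format_path, "r");

	if (!fp)
		return -1;

	snprintf(want, sizeof(want), " %s;", field);
	while (fgets(line, sizeof(line), fp)) {
		char *off;

		if (!strstr(line, "field:") || !strstr(line, want))
			continue;
		off = strstr(line, "offset:");
		if (off)
			sscanf(off, "offset:%d;", &offset);
		break;
	}
	fclose(fp);
	return offset;
}

/* e.g. field_offset("/sys/kernel/tracing/events/net/netif_receive_skb/format", "len") */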
On Tue, 9 Mar 2021 12:43:50 +0800
Tony Lu wrote:
> There are lots of net namespaces on a host that runs containers like k8s.
> It is very common to see the same interface names among different net
> namespaces, such as eth0. It is not possible to distinguish them without
> net namespace inode.
>
>
On Thu, 18 Feb 2021 08:42:15 +0300
Arseny Krasnov wrote:
Not sure if this was pulled in yet, but I do have a small issue with this
patch.
> @@ -69,14 +82,19 @@ TRACE_EVENT(virtio_transport_alloc_pkt,
> __entry->type = type;
> __entry->op = op;
> __entry-
On Wed, 10 Feb 2021 19:23:38 +0100
Eric Dumazet wrote:
> >> The problem here is a kmalloc failure injection into
> >> tracepoint_probe_unregister, but the error is ignored -- so the bpf
> >> program is freed even though the tracepoint is never unregistered.
> >>
> >> I have a first pass at a patc
On Fri, 5 Feb 2021 17:45:43 +0530
Bhaskar Chowdhury wrote:
> s/fucked/messed/
Rules about obscene language are about new code coming into the kernel. We
don't want to encourage people to do sweeping changes of existing code. It
just causes unwanted churn, and adds noise to the git logs.
Sorry,
On Wed, 3 Feb 2021 12:57:24 -0500 (EST)
Mathieu Desnoyers wrote:
> > @@ -147,14 +154,34 @@ func_add(struct tracepoint_func **funcs, struct
> > tracepoint_func *tp_func,
> > if (old[nr_probes].func == tp_func->func &&
> > old[nr_probes].data == tp_func->
On Tue, 2 Feb 2021 19:09:44 -0800
Ivan Babrou wrote:
> On Thu, Jan 28, 2021 at 7:35 PM Ivan Babrou wrote:
> >
> > Hello,
> >
> > We've noticed the following regression in Linux 5.10 branch:
> >
> > [ 128.367231][C0]
> > ==
> >
On Wed, 3 Feb 2021 18:09:27 +0100
Peter Zijlstra wrote:
> On Wed, Feb 03, 2021 at 11:05:31AM -0500, Steven Rostedt wrote:
> > + if (new) {
> > + for (i = 0; old[i].func; i++)
> > + if ((old[i].
From: "Steven Rostedt (VMware)"
The list of tracepoint callbacks is managed by an array that is protected
by RCU. To update this array, a new array is allocated, the updates are
copied over to the new array, and then the list of functions for the
tracepoint is switched over to the
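A simplified sketch of that copy-and-switch pattern, assuming kernel headers, the kernel's struct tracepoint_func, and an updater-side mutex. It illustrates the technique only and is not the tracepoint.c code itself.

static DEFINE_MUTEX(tp_mutex);			/* serializes updaters */
static struct tracepoint_func __rcu *tp_funcs;	/* NULL-func terminated */

/* Append one callback: copy into a new array, publish it, free the old one. */
static int tp_funcs_add(struct tracepoint_func *add, int nr_funcs)
{
	struct tracepoint_func *old, *new;

	lockdep_assert_held(&tp_mutex);

	old = rcu_dereference_protected(tp_funcs, lockdep_is_held(&tp_mutex));
	new = kmalloc_array(nr_funcs + 2, sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	if (old)
		memcpy(new, old, nr_funcs * sizeof(*old));
	new[nr_funcs] = *add;
	memset(&new[nr_funcs + 1], 0, sizeof(*new));	/* terminating entry */

	rcu_assign_pointer(tp_funcs, new);	/* readers switch to the new array */
	synchronize_rcu();			/* wait out readers of the old one */
	kfree(old);
	return 0;
}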
On Wed, 27 Jan 2021 18:08:34 +1100
Alexey Kardashevskiy wrote:
>
> I am running syzkaller and the kernel keeps crashing in
> __traceiter_##_name. This patch makes these crashes happen lot less
I have another solution to the above issue. But I'm now concerned with what
you write below.
> ofte
etworking pulls.
>
> And _if_ the networking people feel that my one-liner was the proper
> fix, you can use it and add my sign-off if you want to, but it really
> was more of a "this is the quick ugly fix for testing" rather than
> anything else.
>
Please add:
Li
On Wed, 6 Jan 2021 17:03:48 -0800
Linus Torvalds wrote:
> --- a/include/linux/skbuff.h
> +++ b/include/linux/skbuff.h
> @@ -366,7 +366,7 @@ static inline void skb_frag_size_sub(skb_frag_t *frag,
> int delta)
> static inline bool skb_frag_must_loop(struct page *p)
> {
> #if defined(CONFIG_HIGH
On Wed, 6 Jan 2021 17:03:48 -0800
Linus Torvalds wrote:
> (although I wonder how/why the heck you've enabled
> CC_OPTIMIZE_FOR_SIZE=y, which is what causes "memcpy()" to be done as
> that "rep movsb". I thought we disabled it because it's so bad on most
> cpus).
Why?
Because to test x86_32, I h
On Tue, 24 Nov 2020 11:22:03 +0800
Jason Wang wrote:
> Btw, after a quick search, there are several other drivers that use the tx
> lock in the tx NAPI.
tx NAPI is not the issue. The issue is that write_msg() (in netconsole.c)
calls this polling logic with the target_list_lock held.
Are those othe
On Mon, 23 Nov 2020 10:52:52 -0800
Jakub Kicinski wrote:
> On Mon, 23 Nov 2020 09:31:28 -0500 Steven Rostedt wrote:
> > On Mon, 23 Nov 2020 13:08:55 +0200
> > Leon Romanovsky wrote:
> >
> >
> > > [ 10.028024] Chain exists of:
> > > [ 10
On Mon, 23 Nov 2020 13:08:55 +0200
Leon Romanovsky wrote:
> [ 10.028024] Chain exists of:
> [ 10.028025] console_owner --> target_list_lock --> _xmit_ETHER#2
Note, the problem is that we have a location that grabs the xmit_lock while
holding target_list_lock (and possibly console_owner)
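A hedged sketch of the inverted ordering being described; the lock names stand in for the netconsole/netdev locks, and the functions are placeholders rather than the actual code paths.

static DEFINE_SPINLOCK(target_list_lock);	/* netconsole side */
static DEFINE_SPINLOCK(xmit_lock);		/* netdev TX queue side */

static void write_msg_path(void)	/* CPU 0: target_list_lock -> xmit_lock */
{
	spin_lock(&target_list_lock);
	spin_lock(&xmit_lock);		/* polls the device to push the message */
	spin_unlock(&xmit_lock);
	spin_unlock(&target_list_lock);
}

static void xmit_path(void)		/* CPU 1: xmit_lock -> console locks */
{
	spin_lock(&xmit_lock);
	printk("tx\n");			/* may spin on console_owner, which can be
					 * held while waiting on target_list_lock,
					 * closing the deadlock cycle */
	spin_unlock(&xmit_lock);
}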
On Thu, 19 Nov 2020 09:04:57 -0800
Alexei Starovoitov wrote:
> On Thu, Nov 19, 2020 at 6:59 AM Steven Rostedt wrote:
> > Linux obviously
> > supports multiple architectures (more than any other OS), but it is pretty
> > stuck to gcc as a compiler (with LLVM just
On Thu, 19 Nov 2020 08:37:35 -0600
Segher Boessenkool wrote:
> > Note that we have a fairly extensive tradition of defining away UB with
> > language extensions, -fno-strict-overflow, -fno-strict-aliasing,
>
> These are options to make a large swath of not correct C programs
> compile (and oft
On Wed, 18 Nov 2020 13:48:37 -0600
Segher Boessenkool wrote:
> > With it_func being the func from the struct tracepoint_func, which is a
> > void pointer, it is typecast to the function that is defined by the
> > tracepoint. args is defined as the arguments that match the proto.
>
> If you hav
On Wed, 18 Nov 2020 11:46:02 -0800
Alexei Starovoitov wrote:
> On Wed, Nov 18, 2020 at 6:22 AM Steven Rostedt wrote:
> >
> > Thus, all functions will be non-variadic in these cases.
>
> That's not the only case where it will blow up.
> Try this on
On Wed, 18 Nov 2020 13:11:27 -0600
Segher Boessenkool wrote:
> Calling this via a different declared function type is undefined
> behaviour, but that is independent of how the function is *defined*.
> Your program can make ducks appear from your nose even if that function
> is never called, if yo
On Wed, 18 Nov 2020 13:58:23 -0500
Steven Rostedt wrote:
> an not worry about gcc
or LLVM, or whatever is used to build the kernel.
-- Steve
On Wed, 18 Nov 2020 19:31:50 +0100
Florian Weimer wrote:
> * Segher Boessenkool:
>
> > On Wed, Nov 18, 2020 at 12:17:30PM -0500, Steven Rostedt wrote:
> >> I could change the stub from (void) to () if that would be better.
> >
> > Don't? In a function
On Wed, 18 Nov 2020 08:50:37 -0800
Nick Desaulniers wrote:
> On Wed, Nov 18, 2020 at 5:23 AM Peter Zijlstra wrote:
> >
> > On Tue, Nov 17, 2020 at 03:34:51PM -0500, Steven Rostedt wrote:
> >
> > > > > Since all tracepoints callbacks have at least o
From: "Steven Rostedt (VMware)"
The list of tracepoint callbacks is managed by an array that is protected
by RCU. To update this array, a new array is allocated, the updates are
copied over to the new array, and then the list of functions for the
tracepoint is switched over to the
On Wed, 18 Nov 2020 14:59:29 +0100
Florian Weimer wrote:
> * Peter Zijlstra:
>
> > I think that as long as the function is completely empty (it never
> > touches any of the arguments) this should work in practise.
> >
> > That is:
> >
> > void tp_nop_func(void) { }
> >
> > can be used as an ar
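A minimal sketch of the pattern being debated: a void(void) stub assigned through a cast to a prototyped callback pointer. Calling through the mismatched type is undefined behaviour per the C standard; the argument here is that it happens to work on the calling conventions the kernel targets, and that it breaks under CFI.

typedef void (*tp_probe_t)(void *data, int arg1, int arg2);

void tp_nop_func(void) { }		/* never touches its (absent) arguments */

static tp_probe_t probe = (tp_probe_t)tp_nop_func;

static void fire_probe(void *data)
{
	probe(data, 1, 2);		/* UB by the standard; works in practice
					 * because the callee ignores everything */
}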
[ Adding netdev as perhaps someone there knows ]
On Wed, 18 Nov 2020 12:09:59 +0800
Jason Wang wrote:
> > This CPU0 lock(_xmit_ETHER#2) -> hard IRQ -> lock(console_owner) is
> > basically
> > soft IRQ -> lock(_xmit_ETHER#2) -> hard IRQ -> printk()
> >
> > Then CPU1 spins on xmit, which is
On Wed, 18 Nov 2020 14:21:36 +0100
Peter Zijlstra wrote:
> I think that as long as the function is completely empty (it never
> touches any of the arguments) this should work in practise.
>
> That is:
>
> void tp_nop_func(void) { }
My original version (the OP of this thread) had this:
+stat
On Tue, 17 Nov 2020 20:54:24 -0800
Alexei Starovoitov wrote:
> > extern int
> > @@ -310,7 +312,12 @@ static inline struct tracepoint
> > *tracepoint_ptr_deref(tracepoint_ptr_t *p)
> > do {\
> > it_func =
From: "Steven Rostedt (VMware)"
The list of tracepoint callbacks is managed by an array that is protected
by RCU. To update this array, a new array is allocated, the updates are
copied over to the new array, and then the list of functions for the
tracepoint is switched over to the
On Tue, 17 Nov 2020 18:08:19 -0500 (EST)
Mathieu Desnoyers wrote:
> Because of this end-of-loop condition ^
> which is also testing for a NULL func. So if we reach a stub, we end up
> stopping
> iteration and not firing the following tracepoint probes.
Ah right. OK, since it's looking like we'r
On Tue, 17 Nov 2020 16:42:44 -0800
Matt Mullins wrote:
> > Indeed with a stub function, I don't see any need for READ_ONCE/WRITE_ONCE.
> >
>
> I'm not sure if this is a practical issue, but without WRITE_ONCE, can't
> the write be torn? A racing __traceiter_ could potentially see a
> half-m
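A hedged sketch of the READ_ONCE/WRITE_ONCE pairing in question: the update is a single, untorn store and the racing iterator performs a single load, so it observes either the old or the new pointer, never a half-modified one.

struct callback {
	void (*func)(void *data);
	void *data;
};

static void replace_func(struct callback *cb, void (*new_func)(void *data))
{
	WRITE_ONCE(cb->func, new_func);		/* one untearable store */
}

static void run_callback(struct callback *cb)
{
	void (*func)(void *data) = READ_ONCE(cb->func);	/* one load, no re-read */

	if (func)
		func(cb->data);
}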
On Tue, 17 Nov 2020 13:33:42 -0800
Kees Cook wrote:
> As I think got discussed in the thread, what you had here wouldn't work
> in a CFI build if the function prototype of the call site and the
> function don't match. (Though I can't tell if .func() is ever called?)
>
> i.e. .func's prototype mu
On Tue, 17 Nov 2020 16:22:23 -0500 (EST)
Mathieu Desnoyers wrote:
> If we don't call the stub, then there is no point in having the stub at
> all, and we should just compare to a constant value, e.g. 0x1UL. As far
> as I can recall, comparing with a small immediate constant is more efficient
> th
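A sketch of the alternative Mathieu describes: mark removed entries with a small sentinel value and skip them in the iterator instead of calling a stub. It assumes the kernel's struct tracepoint_func (a func/data pair) and a NULL-func terminated array.

#define TP_REMOVED_FUNC	((void *)0x1UL)	/* sentinel for a removed probe */

static void tp_iterate(struct tracepoint_func *funcs)
{
	for (; funcs->func; funcs++) {
		if (funcs->func == TP_REMOVED_FUNC)
			continue;	/* compare is cheaper than calling a stub */
		/* cast funcs->func to the probe prototype and call it here */
	}
}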
On Tue, 17 Nov 2020 15:34:51 -0500
Steven Rostedt wrote:
> On Tue, 17 Nov 2020 14:47:20 -0500 (EST)
> Mathieu Desnoyers wrote:
>
> > There seems to be more effect on the data size: adding the "stub_func" field
> > in struct tracepoint adds 8320 bytes of data
On Tue, 17 Nov 2020 14:47:20 -0500 (EST)
Mathieu Desnoyers wrote:
> There seems to be more effect on the data size: adding the "stub_func" field
> in struct tracepoint adds 8320 bytes of data to my vmlinux. But considering
> the layout of struct tracepoint:
>
> struct tracepoint {
> cons
On Tue, 17 Nov 2020 14:15:10 -0500 (EST)
Mathieu Desnoyers wrote:
> diff --git a/include/linux/tracepoint-defs.h b/include/linux/tracepoint-defs.h
> index e7c2276be33e..e0351bb0b140 100644
> --- a/include/linux/tracepoint-defs.h
> +++ b/include/linux/tracepoint-defs.h
> @@ -38,6 +38,7 @@ struct
On Mon, 16 Nov 2020 17:51:07 -0500
Steven Rostedt wrote:
> [ Kees, I added you because you tend to know about these things.
> Is it OK to assign a void func(void) that doesn't do anything and returns
> nothing to a function pointer that could be called with parameters? We need
[ Kees, I added you because you tend to know about these things.
Is it OK to assign a void func(void) that doesn't do anything and returns
nothing to a function pointer that could be called with parameters? We need
to add stubs for tracepoints when we fail to allocate a new array on
removal
On Mon, 16 Nov 2020 16:34:41 -0500 (EST)
Mathieu Desnoyers wrote:
> - On Nov 16, 2020, at 4:02 PM, rostedt rost...@goodmis.org wrote:
>
> > On Mon, 16 Nov 2020 15:44:37 -0500
> > Steven Rostedt wrote:
> >
> >> If you use a stub function, it shouldn
On Mon, 16 Nov 2020 16:02:18 -0500
Steven Rostedt wrote:
> + if (new) {
> + for (i = 0; old[i].func; i++)
> + if (old[i].func != tp_func->func
> + || old[i].data !
On Mon, 16 Nov 2020 15:44:37 -0500
Steven Rostedt wrote:
> If you use a stub function, it shouldn't affect anything. And the worst
> that would happen is that you have a slight overhead of calling the stub
> until you can properly remove the callback.
Something like this:
(haven
On Mon, 16 Nov 2020 15:37:27 -0500 (EST)
Mathieu Desnoyers wrote:
> >
> > Mathieu,
> >
> > Can't we do something that would still allow unregistering a probe even if
> > a new probe array fails to allocate? We could kick off an irq work to try to
> > clean up the probe at a later time, but still,
On Sat, 14 Nov 2020 21:52:55 -0800
Matt Mullins wrote:
> bpf_link_free is always called in process context, including from a
> workqueue and from __fput. Neither of these have the ability to
> propagate an -ENOMEM to the caller.
>
Hmm, I think the real fix is to not have unregistering a tracep
On Mon, 26 Oct 2020 21:30:14 -0700
Alexei Starovoitov wrote:
> > Direct calls weren't added so that bpf and ftrace could co-exist, it was
> > that for certain cases, bpf wanted a faster way to access arguments,
> > because it still worked with ftrace, but the saving of regs was too
> > strenuous.
On Fri, 23 Oct 2020 13:03:22 -0700
Andrii Nakryiko wrote:
> Basically, maybe ftrace subsystem could provide a set of APIs to
> prepare a set of functions to attach to. Then BPF subsystem would just
> do what it does today, except instead of attaching to a specific
> kernel function, it would atta
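Purely hypothetical shape for such an API, invented here for illustration; none of these functions exist in the kernel.

/* Hypothetical batch-attach interface -- names invented for illustration. */
struct ftrace_batch;

struct ftrace_batch *ftrace_batch_prepare(const char **funcs, int nr_funcs);
int  ftrace_batch_attach(struct ftrace_batch *batch,
			 void (*cb)(unsigned long ip, struct pt_regs *regs));
void ftrace_batch_detach(struct ftrace_batch *batch);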
On Fri, 23 Oct 2020 08:09:32 +0200
Jiri Olsa wrote:
> >
> > The below is a quick proof of concept patch I whipped up. It will always
> > save 6 arguments, so if BPF is really interested in just saving the bare
> > minimum of arguments before calling, it can still use direct. But if you
> > are go
On Thu, 22 Oct 2020 12:21:50 -0400
Steven Rostedt wrote:
> On Thu, 22 Oct 2020 10:42:05 -0400
> Steven Rostedt wrote:
>
> > I'd like to see how batch functions will work. I guess I need to start
> > looking at the bpf trampoline, to see if we can modify the ftrac
On Thu, 22 Oct 2020 10:42:05 -0400
Steven Rostedt wrote:
> I'd like to see how batch functions will work. I guess I need to start
> looking at the bpf trampoline, to see if we can modify the ftrace
> trampoline to have a quick access to parameters. It would be much more
> ben
On Thu, 22 Oct 2020 16:11:54 +0200
Jiri Olsa wrote:
> I understand direct calls as a way that bpf trampolines and ftrace can
> co-exist together - ebpf trampolines need that functionality of accessing
> parameters of a function as if it was called directly and at the same
> point we need to be ab
On Thu, 22 Oct 2020 10:21:22 +0200
Jiri Olsa wrote:
> hi,
> this patchset tries to speed up the attach time for trampolines
> and make bpftrace faster for wildcard use cases like:
>
> # bpftrace -ve "kfunc:__x64_sys_s* { printf("test\n"); }"
>
> Profiles show mostly ftrace backend, because we
let tracing or anything that needs rcu in
those locations.
But for your patch:
Acked-by: Steven Rostedt (VMware)
-- Steve
On Fri, 21 Aug 2020 11:38:31 -0400
Steven Rostedt wrote:
> > > At some point we're going to have to introduce noinstr to idle as well.
> > > But until that time this should indeed cure things.
> >
What the above means, is that ideally we will get rid of all
On Fri, 21 Aug 2020 08:06:49 -0700
Eric Dumazet wrote:
> On Fri, Aug 21, 2020 at 1:59 AM wrote:
> >
> > On Fri, Aug 21, 2020 at 08:30:43AM +0200, Marco Elver wrote:
> > > With KCSAN enabled, prandom_u32() may be called from any context,
> > > including idle CPUs.
> > >
> > > Therefore, switch
On Wed, 19 Aug 2020 10:29:09 -0700
Jesse Brandeburg wrote:
> What I don't understand in the stack trace is this:
> > > [ 107.654661] Call Trace:
> > > [ 107.657735]
> > > [ 107.663155] ? ftrace_graph_caller+0xc0/0xc0
> > > [ 107.667929] call_timer_fn+0x3b/0x1b0
> > > [ 107.672238] ? ne
On Wed, 19 Aug 2020 17:01:06 +0530
Naresh Kamboju wrote:
> kernel warning noticed on x86_64 while running LTP tracing ftrace-stress-test
> case. Started noticing on the stable-rc linux-5.8.y branch.
>
> This device booted with KASAN config and DYNAMIC tracing configs and more.
> This reported is
On Thu, 6 Aug 2020 00:27:13 +0800
Muchun Song wrote:
> We found a case of kernel panic on our server. The stack trace is as
> follows (omitting some irrelevant information):
>
> BUG: kernel NULL pointer dereference, address: 0080
> RIP: 0010:kprobe_ftrace_handler+0x5e/0xe0
> RSP: 0
On Thu, 6 Aug 2020 00:59:41 +0800
Muchun Song wrote:
> > The original patch has already been pulled into the queue and tested.
> > Please make a new patch that adds this update, as if your original
> > patch has already been accepted.
>
> Will do, thanks!
Also, if you can, add the following:
On Mon, 3 Aug 2020 23:50:42 +0900
Masami Hiramatsu wrote:
> Nice catch!
>
> Acked-by: Masami Hiramatsu
>
> Fixes: ae6aa16fdc16 ("kprobes: introduce ftrace based optimization")
> Cc: sta...@vger.kernel.org
Thanks Masami,
I'll add this to my queue for the merge window.
-- Steve
started by Steven (see Link) and finished by Alan; added
> Steven's Signed-off-by with his permission.
>
> Link: https://lore.kernel.org/r/20200628194334.6238b...@oasis.local.home
> Signed-off-by: Steven Rostedt (VMware)
> Signed-off-by: Alan Maguire
> ---
> kernel
Cc'd Frederic and Peter.
-- Steve
On Sun, 28 Jun 2020 04:29:17 -0700
syzbot wrote:
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:7a64135f libbpf: Adjust SEC short cut for expected attach ..
> git tree: bpf
> console output: https://syzkaller.appspot.com/x/log.
On Sat, 9 May 2020 18:01:51 -0700
Shannon Nelson wrote:
> If the driver is able to detect that the device firmware has come back
> alive, through user intervention or whatever, should there be a way to
> "untaint" the kernel? Or would you expect it to remain tainted?
The only way to untaint a
On Thu, 3 Oct 2019 09:18:40 -0700
Alexei Starovoitov wrote:
> I think dropping last events is just as bad. Is there a mode to overwrite old
> and keep the last N (like perf does)?
Well, it drops it by pages. Thus you should always have the last page
of events.
> Peter Wu brought this issue to
On Wed, 2 Oct 2019 17:18:21 +
Alexei Starovoitov wrote:
> >> It's an interesting idea, but I don't think it can work.
> >> Please see bpf_trace_printk implementation in kernel/trace/bpf_trace.c
> >> It's a lot more than string printing.
> >
> > Well, trace_printk() is just string printing.
On Tue, 1 Oct 2019 22:18:18 +
Alexei Starovoitov wrote:
> > And then you can just format the string from the bpf_trace_printk()
> > into msg, and then have:
> >
> > trace_bpf_print(msg);
>
> It's an interesting idea, but I don't think it can work.
> Please see bpf_trace_printk impleme
On Mon, 30 Sep 2019 18:22:28 -0700
Alexei Starovoitov wrote:
> tracefs is a file system, so clearly file based acls are much better fit
> for all tracefs operations.
> But that is not the case for ftrace overall.
> bpf_trace_printk() calls trace_printk() that dumps into trace pipe.
> Technically
On Wed, 28 Aug 2019 21:07:24 -0700
Alexei Starovoitov wrote:
> >
> > This won’t make me much more comfortable, since CAP_BPF lets it do an
> > ever-growing set of nasty things. I’d much rather one or both of two things
> > happen:
> >
> > 1. Give it CAP_TRACING only. It can leak my data, but i
On Thu, 29 Aug 2019 10:23:10 -0700
Alexei Starovoitov wrote:
> > CAP_TRACE_KERNEL: Use all of perf, ftrace, kprobe, etc.
> >
> > CAP_TRACE_USER: Use all of perf with scope limited to user mode and
> > uprobes.
>
> imo that makes little sense from security pov, since
> such CAP_TRACE_KERNEL (
On Thu, 29 Aug 2019 10:19:24 -0700
Alexei Starovoitov wrote:
> On Thu, Aug 29, 2019 at 09:34:34AM -0400, Steven Rostedt wrote:
> >
> > As the above seems to favor the idea of CAP_TRACING allowing write
> > access to tracefs, should we have a CAP_TRACING_RO for just read a
On Wed, 28 Aug 2019 15:08:28 -0700
Alexei Starovoitov wrote:
> On Wed, Aug 28, 2019 at 09:14:21AM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 27, 2019 at 04:01:08PM -0700, Andy Lutomirski wrote:
> >
> > > > Tracing:
> > > >
> > > > CAP_BPF and perf_paranoid_tracepoint_raw() (which is
> > > >
On Tue, 27 Aug 2019 18:12:59 -0700
Andy Lutomirski wrote:
> Too many slashes :/
>
> A group could work for v1. Maybe all the tools should get updated to use
> this path?
trace-cmd now does. In fact, if run as root, it will first check if
tracefs is mounted, and if not, it will try to mount it
On Tue, 27 Aug 2019 16:34:47 -0700
Andy Lutomirski wrote:
> > > CAP_TRACING does not override normal permissions on sysfs or debugfs.
> > > This means that, unless a new interface for programming kprobes and
> > > such is added, it does not directly allow use of kprobes.
> >
> > kprobes can be
On Tue, 27 Aug 2019 16:01:08 -0700
Andy Lutomirski wrote:
> [adding some security and tracing folks to cc]
>
> On Tue, Aug 27, 2019 at 1:52 PM Alexei Starovoitov wrote:
> >
> > Introduce CAP_BPF that allows loading all types of BPF programs,
> > create most map types, load BTF, iterate programs
On Fri, 22 Feb 2019 18:28:53 -0800
Alexei Starovoitov wrote:
> First we introduce bpf_probe_kernel_read and bpf_probe_user_read and
> introduce clang/gcc tooling to catch the mistakes.
> Going over these 400+ places and manually grepping kernel sources
> for __user keyword is not a great proposal
On Fri, 22 Feb 2019 11:34:58 -0800
Alexei Starovoitov wrote:
> so you're saying you will break existing kprobe scripts?
Yes we may.
> I don't think it's a good idea.
> It's not acceptable to break bpf_probe_read uapi.
Then you may need to add more code to determine if the address is user
space
On Fri, 22 Feb 2019 11:27:05 -0800
Alexei Starovoitov wrote:
> On Fri, Feb 22, 2019 at 09:43:14AM -0800, Linus Torvalds wrote:
> >
> > Then we should still probably fix up "__probe_kernel_read()" to not
> > allow user accesses. The easiest way to do that is actually likely to
> > use the "unsafe
On Thu, 21 Feb 2019 00:49:40 -0500
"Joel Fernandes (Google)" wrote:
> Recently I added an RCU annotation check to rcu_assign_pointer(). All
> pointers assigned to RCU protected data are to be annotated with __rcu
> in order to be able to use rcu_assign_pointer(), similar to checks in
> other RCU AP
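A minimal sketch of the annotation the check expects, assuming kernel headers: the shared pointer carries __rcu, updaters publish through rcu_assign_pointer(), and updater-side reads use rcu_dereference_protected(). The structure and names are illustrative only.

struct config {
	int value;
};

static struct config __rcu *cur_config;
static DEFINE_MUTEX(config_mutex);

static void update_config(struct config *new_cfg)
{
	struct config *old;

	mutex_lock(&config_mutex);
	old = rcu_dereference_protected(cur_config,
					lockdep_is_held(&config_mutex));
	rcu_assign_pointer(cur_config, new_cfg);
	mutex_unlock(&config_mutex);

	synchronize_rcu();	/* wait for readers still using the old pointer */
	kfree(old);
}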
On Wed, 19 Dec 2018 17:40:49 +0100
Jesper Dangaard Brouer wrote:
> My napi_monitor use-case is not a slow-path event, even though in
> optimal cases we should handle 64 packets per tracepoint invocation,
> but I'm using this for 100G NICs with >20Mpps. And I mostly use the
> tool when something l
On Wed, 19 Dec 2018 08:36:43 +0100
Jesper Dangaard Brouer wrote:
> > +TRACE_EVENT(net_dev_notifier_entry,
> > +
> > + TP_PROTO(const struct netdev_notifier_info *info, unsigned long val),
> > +
> > + TP_ARGS(info, val),
> > +
> > + TP_STRUCT__entry(
> > + __string( name,
On Wed, 12 Dec 2018 19:05:53 +0100
Peter Zijlstra wrote:
> On Wed, Dec 12, 2018 at 05:09:17PM +, Song Liu wrote:
> > > And while this tracks the bpf kallsyms, it does not do all kallsyms.
> > >
> > > Oooh, I see the problem, everybody is doing their own custom
> > > kallsym_{add,del}()
On Fri, 30 Nov 2018 12:55:21 -0800
Jarkko Sakkinen wrote:
> On Fri, Nov 30, 2018 at 11:56:52AM -0800, Davidlohr Bueso wrote:
> > On Fri, 30 Nov 2018, Kees Cook wrote:
> >
> > > On Fri, Nov 30, 2018 at 11:27 AM Jarkko Sakkinen
> > > wrote:
> > > >
> > > > In order to comply with the CoC, re
On Fri, 30 Nov 2018 20:39:01 +
Abuse wrote:
> On Friday, 30 November 2018 20:35:07 GMT David Miller wrote:
> > From: Jens Axboe
> > Date: Fri, 30 Nov 2018 13:12:26 -0700
> >
> > > On 11/30/18 12:56 PM, Davidlohr Bueso wrote:
> > >> On Fri, 30 Nov 2018, Kees Cook wrote:
> > >>
> > >>>
On Mon, 12 Nov 2018 15:46:53 -0500 (EST)
Mathieu Desnoyers wrote:
> I also notice that in two cases, a "gro_result_t" is implicitly cast
> to "int". I usually frown upon this kind of stuff, because it's asking
> for trouble if gro_result_t is typedef'd to something other than "int" in the
> future.
>
On Mon, 12 Nov 2018 15:20:55 -0500 (EST)
Mathieu Desnoyers wrote:
>
> Hrm, looking at this again, I notice that there is a single DEFINE_EVENT
> using net_dev_template_simple.
>
> We could simply turn netif_receive_skb_list_exit into a TRACE_EVENT(),
> remove the net_dev_template_simple, and re
nt is the result of the packet
> reception or a simple coincidence after further processing by the
> thread.
>
> Signed-off-by: Geneviève Bastien
> CC: Mathieu Desnoyers
> CC: Steven Rostedt
> CC: Ingo Molnar
> CC: David S. Miller
> ---
> Changes in v2:
> - Add t
nt is the result of the packet
> reception or a simple coincidence after further processing by the
> thread.
>
> Signed-off-by: Geneviève Bastien
> CC: Mathieu Desnoyers
> CC: Steven Rostedt
> CC: Ingo Molnar
> CC: David S. Miller
> ---
> include/trace/events/net.h |