On Wed, 2017-10-11 at 11:21 -0700, Alexei Starovoitov wrote:
> On Wed, Oct 11, 2017 at 06:31:45AM -0700, Eric Dumazet wrote:
> > On Tue, 2017-10-10 at 19:56 -0700, Alexei Starovoitov wrote:
> > 
> > > actually we hit that too for completely different tracing use case.
> > > Indeed would be good to generate socket cookie unconditionally
> > > for all sockets. I don't think there is any harm.
> > > 
> > 
> > Simply call sock_gen_cookie() when needed.
> > 
> > If a tracepoint needs the cookie and the cookie was not yet generated,
> > it will be generated at this point.
> 
> we already have bpf_get_socket_cookie() that will call it,
> but this helper is for bpf socket filters, clsact and other
> networking related prog types, whereas all of tracing is
> read-only and side-effect-free, so we cannot use
> bpf_get_socket_cookie() there.
> Hence for tracing in kprobe-bpf we use raw sk pointer
> as map key and full tuple when passing the socket info to user
> space. If we could use socket cookie vs full tuple it would
> make a nice difference.

Since this sock_gen_cookie() is lock-free and IRQ ready, it should not be a
problem to pretend it works with a const socket.
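
A minimal sketch of what that could look like (the const wrapper and its
name are purely illustrative; the cookie logic below just restates, from
memory, the lock-free pattern of the existing helper in
net/core/sock_diag.c):

/* Existing lock-free, lazy cookie generation: the per-netns counter is
 * bumped only the first time a cookie is requested for a given socket.
 */
u64 sock_gen_cookie(struct sock *sk)
{
	while (1) {
		u64 res = atomic64_read(&sk->sk_cookie);

		if (res)
			return res;
		res = atomic64_inc_return(&sock_net(sk)->cookie_gen);
		atomic64_cmpxchg(&sk->sk_cookie, 0, res);
	}
}

/* Hypothetical const-friendly wrapper for read-only (tracing) callers;
 * the cast is tolerable because the only write is the atomic, lazy
 * initialisation of sk->sk_cookie.
 */
static inline u64 sock_gen_cookie_const(const struct sock *sk)
{
	return sock_gen_cookie((struct sock *)sk);
}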

I am a bit unsure about revealing, through the socket cookie, a precise
count of the sockets created in a netns. An attacker might use this in a
side-channel attack.
