On 01/24/2019 06:34 PM, Alexei Starovoitov wrote:
> On Thu, Jan 24, 2019 at 06:29:55PM -0800, Eric Dumazet wrote:
>>
>>
>> On 01/24/2019 03:58 PM, Alexei Starovoitov wrote:
>>> On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
>>
>>>> and from NMI ...
>>>
>>> progs are not preemptible, and map syscall accessors have bpf_prog_active
>>> counters.
>>> So nmi/kprobe progs will not be running while the syscall is running.
>>> Hence deadlock is not possible and irq_save is not needed.
>>
>>
>> Speaking of NMI, how can pcpu_freelist_push() and pop() actually work?
>>
>> It seems bpf_get_stackid() can be called from NMI, and lockdep seems to 
>> complain loudly
> 
> it's a known false positive.
> https://lkml.org/lkml/2018/7/25/756
> and the same answer as before:
> we're not going to penalize performance to shut up a false positive.
> 

As far as lockdep is concerned, I do not believe we care about performance.

How can we remove this false positive, so that lockdep stays alive (it disables
itself after the first report) even after running the bpf test_progs selftests?


Let's see if we understood this well.

1. create a perf event PERF_TYPE_HARDWARE:PERF_COUNT_HW_CPU_CYCLES
2. attach a bpf program to this event
3. since that's a hw event, the bpf program is executed in NMI context
4. the bpf program calls bpf_get_stackid to record the trace in a bpf map
   (a rough sketch of such a program follows the list)
5. bpf_get_stackid calls pcpu_freelist_pop and pcpu_freelist_push from NMI
6. userspace calls sys_bpf(bpf_map_lookup_elem), which calls bpf_stackmap_copy,
   which can call pcpu_freelist_push
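
To make the scenario concrete, here is a minimal sketch of the kind of program
involved. This is my own illustration in the style of the selftests; the names,
map sizing and headers are mine, not taken from any tree:

/* hypothetical example: perf_event bpf program recording stack traces
 * into a BPF_MAP_TYPE_STACK_TRACE map (steps 2-5 above)
 */
#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include "bpf_helpers.h"

#ifndef PERF_MAX_STACK_DEPTH
#define PERF_MAX_STACK_DEPTH 127
#endif

struct bpf_map_def SEC("maps") stackmap = {
	.type = BPF_MAP_TYPE_STACK_TRACE,
	.key_size = sizeof(__u32),
	.value_size = PERF_MAX_STACK_DEPTH * sizeof(__u64),
	.max_entries = 1024,
};

SEC("perf_event")
int sample_stack(struct bpf_perf_event_data *ctx)
{
	/* for a hardware PMU event this runs in NMI context (step 3);
	 * bpf_get_stackid() grabs a bucket from the stack map's percpu
	 * freelist, i.e. pcpu_freelist_pop()/push() (steps 4-5)
	 */
	bpf_get_stackid(ctx, &stackmap, 0);
	return 0;
}

char _license[] SEC("license") = "GPL";

Userspace attaches this with the PERF_EVENT_IOC_SET_BPF ioctl on the cycles
event (steps 1-2) and later reads the map via bpf_map_lookup_elem (step 6).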


It seems pcpu_freelist_pop and pcpu_freelist_push are not NMI safe,
so what prevents bad things from happening?
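
For reference, the push path boils down to this pattern (my paraphrase of
kernel/bpf/percpu_freelist.c, simplified and possibly not matching the exact
source):

#include <linux/spinlock.h>

/* simplified paraphrase of the percpu freelist, not the exact code */
struct pcpu_freelist_node {
	struct pcpu_freelist_node *next;
};

struct pcpu_freelist_head {
	struct pcpu_freelist_node *first;
	raw_spinlock_t lock;
};

static void freelist_push(struct pcpu_freelist_head *head,
			  struct pcpu_freelist_node *node)
{
	/* plain raw_spin_lock(): no irqsave, and nothing here stops an
	 * NMI from firing on this cpu while the lock is held
	 */
	raw_spin_lock(&head->lock);
	node->next = head->first;
	head->first = node;
	raw_spin_unlock(&head->lock);
}

If an NMI fires while the syscall path (step 6) holds that per-cpu lock and the
attached program then calls bpf_get_stackid(), which tries to take the same
lock (step 5), we spin forever, unless the bpf_prog_active counter really does
exclude the two paths; presumably that exclusion is what lockdep cannot see.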

Thanks.
