On Thu, Jan 24, 2019 at 07:01:09PM +0100, Peter Zijlstra wrote:
> 
> Thanks for having kernel/locking people on Cc...
> 
> On Wed, Jan 23, 2019 at 08:13:55PM -0800, Alexei Starovoitov wrote:
> 
> > Implementation details:
> > - on !SMP bpf_spin_lock() becomes nop
> 
> Because no BPF program is preemptible? I don't see any assertions or
> even a comment that says this code is non-preemptible.
> 
> AFAICT some of the BPF_RUN_PROG things are under rcu_read_lock() only,
> which is not sufficient.
> 
> > - on architectures that don't support queued_spin_lock trivial lock is used.
> >   Note that arch_spin_lock cannot be used, since not all archs agree that
> >   zero == unlocked and sizeof(arch_spinlock_t) != sizeof(__u32).
> 
> I really don't much like direct usage of qspinlock; esp. not as a
> surprise.
> 
> Why does it matter if 0 means unlocked; that's what
> __ARCH_SPIN_LOCK_UNLOCKED is for.
> 
> I get the sizeof(__u32) thing, but why not key off of that?
> 
> > Next steps:
> > - allow bpf_spin_lock in other map types (like cgroup local storage)
> > - introduce BPF_F_LOCK flag for bpf_map_update() syscall and helper
> >   to request kernel to grab bpf_spin_lock before rewriting the value.
> >   That will serialize access to map elements.
> 
> So clearly this map stuff is shared between bpf proglets, otherwise
> there would not be a need for locking. But what happens if one is from
> task context and another from IRQ context?
> 
> I don't see a local_irq_save()/restore() anywhere. What avoids the
> trivial lock inversion?
> 

Also; what about BPF running from NMI context and using locks?

Reply via email to