On Sat, Apr 12, 2025 at 04:30:24AM +0300, Jarkko Sakkinen wrote:
> On Fri, Apr 11, 2025 at 11:37:25PM +0300, Jarkko Sakkinen wrote:
> > On Fri, Apr 11, 2025 at 04:59:11PM +0100, David Howells wrote:
> > > Jarkko Sakkinen <jar...@kernel.org> wrote:
> > > 
> > > > +       spin_lock_irqsave(&key_graveyard_lock, flags);
> > > > +       list_splice_init(&key_graveyard, &graveyard);
> > > > +       spin_unlock_irqrestore(&key_graveyard_lock, flags);
> > > 
> > > I would wrap this bit in a check to see if key_graveyard is empty so
> > > that we can avoid disabling irqs and taking the lock if the graveyard
> > > is empty.
> > 
> > Can do, and does make sense.
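
Something like this at the top of the GC pass, as a rough sketch (the
unlocked list_empty() is racy by design; I'm assuming a key_put() that
races with it also schedules key_gc_work, so a late addition is caught
on the next pass):

	struct list_head graveyard;
	unsigned long flags;

	INIT_LIST_HEAD(&graveyard);

	/* Don't disable irqs and take the lock when there is nothing
	 * to reap.
	 */
	if (!list_empty(&key_graveyard)) {
		spin_lock_irqsave(&key_graveyard_lock, flags);
		list_splice_init(&key_graveyard, &graveyard);
		spin_unlock_irqrestore(&key_graveyard_lock, flags);
	}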
> > 
> > > 
> > > > +               if (!refcount_inc_not_zero(&key->usage)) {
> > > 
> > > Sorry, but eww.  You're going to wangle the refcount twice on every
> > > key on the system every time the gc does a pass.  Further, in some
> > > cases inc_not_zero is not the fastest op in the world.
> > 
> > One could alternatively check "test_bit(KEY_FLAG_FINAL_PUT,
> > &key->flags) && !refcount_inc_not_zero(&key->usage)" without mb() on
> > either side and
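
To spell out the idea: the GC would only touch the refcount for keys
that have already seen their final key_put(), instead of bumping and
dropping it for every key in the system. A rough sketch of the check in
the GC loop (assuming the usual found_unreferenced_key label in
key_garbage_collector()):

	/* Only a key with FINAL_PUT set can be unreferenced, so skip
	 * the atomic op for everything else.
	 */
	if (test_bit(KEY_FLAG_FINAL_PUT, &key->flags) &&
	    !refcount_inc_not_zero(&key->usage))
		goto found_unreferenced_key;

This trades the unconditional inc/dec pair for a plain flags test on
the common path.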
> 
> Refactoring the changes to key_put() would look like this (draft):

I'll post a fresh patch set later :-) I've realized that this does not
make sense as a single patch. So yeah, it'll be a patch set.

One change that would IMHO make sense would be:

diff --git a/security/keys/key.c b/security/keys/key.c
index 7198cd2ac3a3..aecbd624612d 100644
--- a/security/keys/key.c
+++ b/security/keys/key.c
@@ -656,10 +656,12 @@ void key_put(struct key *key)
                                spin_lock_irqsave(&key->user->lock, flags);
                                key->user->qnkeys--;
                                key->user->qnbytes -= key->quotalen;
+                               set_bit(KEY_FLAG_FINAL_PUT, &key->flags);
                                spin_unlock_irqrestore(&key->user->lock, flags);
+                       } else {
+                               smp_mb(); /* key->user before FINAL_PUT set. */
+                               set_bit(KEY_FLAG_FINAL_PUT, &key->flags);
                        }
-                       smp_mb(); /* key->user before FINAL_PUT set. */
-                       set_bit(KEY_FLAG_FINAL_PUT, &key->flags);
                        schedule_work(&key_gc_work);
                }
        }


I did not see anything obvious that this would endanger, and it reduces
the number of smp_mb() calls (the quota path now leans on the ordering
provided by the unlock instead). This is just on top of mainline ...
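
For context, the pairing read side in security/keys/gc.c would stay as
it is in mainline, something along these lines (quoting from memory, so
take the exact shape with a grain of salt):

	if (test_bit(KEY_FLAG_FINAL_PUT, &key->flags)) {
		smp_mb(); /* Clobber key->user after FINAL_PUT seen. */
		goto found_unreferenced_key;
	}

I.e. the GC must still observe the quota updates to key->user before it
tears the key down, which is what the write-side ordering above is for.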

BR, Jarkko
