From: Peter Enderborg <[email protected]>

The count and scan can be separated in time, and there is a fair chance
that all work is already done when the scan starts, which might in turn
result in a needless retry.  This commit therefore avoids this retry by
returning SHRINK_STOP.

Reviewed-by: Uladzislau Rezki (Sony) <[email protected]>
Signed-off-by: Peter Enderborg <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 67912ad..0806762 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3314,7 +3314,7 @@ kfree_rcu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
 		break;
 	}
 
-	return freed;
+	return freed == 0 ? SHRINK_STOP : freed;
 }
 
 static struct shrinker kfree_rcu_shrinker = {
-- 
2.9.5

