On 11/08/2017 05:21 AM, David Laight wrote:
> From: Sagi Grimberg
>> Sent: 08 November 2017 07:28
> ...
>>> Why would you give the user a knob to destroy what you carefully
>>> optimized?
>>
>> Well, looks like someone relies on this knob, the question is if he is
>> doing something better for his workload. I don't know, it's really up
>> to the user to say.
>
> Maybe the user wants to ensure that nothing except some very specific
> processing happens on some (or most) of the cpu cores.
>
> If the expected overall ethernet data rate isn't exceptionally large,
> is there any reason to allocate a queue (etc) for every cpu?
There are numerous valid reasons to be able to set the affinity, for both
nics and block drivers. It's great that the kernel has a predefined layout
that works well, but users do need the flexibility to reconfigure
affinities to suit their needs.

For the particular mlx5 case, I'm actually not sure how the FB
configuration differs from the in-kernel stuff. I'll take a look at that.
It may just be that the configuration exists because the code used to be
driver private and frequently changed; setting it at bootup to a known
good configuration helped eliminate problems when upgrading kernels. I
also remember some cases of removing CPU0 from the mask.

But that particular case is completely orthogonal to whether or not we
should allow the user to reconfigure. The answer to that is clearly YES,
and we should ensure that this is possible.

--
Jens Axboe
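For concreteness, the user-facing knob under discussion is the per-IRQ
affinity file in procfs, which takes a hexadecimal CPU bitmask. A minimal
sketch of steering a queue's interrupt away from CPU0 (the helper name
`cpulist_to_mask` and IRQ number 42 are illustrative, not from the thread;
this assumes 32 or fewer CPUs and requires root on a real system):

```shell
# cpulist_to_mask: turn a comma-separated CPU list (e.g. "2,3") into the
# hex bitmask format expected by /proc/irq/<N>/smp_affinity.
# Assumption: fewer than 32 CPUs, so a single hex word suffices.
cpulist_to_mask() {
    local mask=0 cpu
    for cpu in $(echo "$1" | tr ',' ' '); do
        mask=$(( mask | (1 << cpu) ))   # set bit <cpu> in the mask
    done
    printf '%x\n' "$mask"
}

# Restrict (illustrative) IRQ 42 to CPUs 2-3, keeping CPU0 out of the mask:
#   echo "$(cpulist_to_mask 2,3)" > /proc/irq/42/smp_affinity
cpulist_to_mask 2,3   # prints "c" (bits 2 and 3 set)
```

Note that for queues with kernel-managed affinity this write can be
rejected with EIO, which is exactly the behavior the thread is debating.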